00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2444 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3705 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.207 Using shallow fetch with depth 1 00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.207 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.252 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.252 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.675 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.688 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.701 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.701 > git config core.sparsecheckout # timeout=10 00:00:07.715 > git read-tree -mu HEAD # timeout=10 00:00:07.731 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.755 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.755 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.840 [Pipeline] Start of Pipeline 00:00:07.853 [Pipeline] library 00:00:07.855 Loading library shm_lib@master 00:00:07.855 Library shm_lib@master is cached. Copying from home. 00:00:07.871 [Pipeline] node 00:00:07.881 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.882 [Pipeline] { 00:00:07.890 [Pipeline] catchError 00:00:07.891 [Pipeline] { 00:00:07.902 [Pipeline] wrap 00:00:07.908 [Pipeline] { 00:00:07.915 [Pipeline] stage 00:00:07.917 [Pipeline] { (Prologue) 00:00:08.111 [Pipeline] sh 00:00:08.397 + logger -p user.info -t JENKINS-CI 00:00:08.417 [Pipeline] echo 00:00:08.419 Node: CYP12 00:00:08.427 [Pipeline] sh 00:00:08.757 [Pipeline] setCustomBuildProperty 00:00:08.767 [Pipeline] echo 00:00:08.769 Cleanup processes 00:00:08.775 [Pipeline] sh 00:00:09.066 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.066 1496191 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.079 [Pipeline] sh 00:00:09.365 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.365 ++ grep -v 'sudo pgrep' 00:00:09.365 ++ awk '{print $1}' 00:00:09.365 + sudo kill -9 00:00:09.365 + true 00:00:09.381 [Pipeline] cleanWs 00:00:09.391 [WS-CLEANUP] Deleting project workspace... 00:00:09.391 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.398 [WS-CLEANUP] done 00:00:09.402 [Pipeline] setCustomBuildProperty 00:00:09.417 [Pipeline] sh 00:00:09.704 + sudo git config --global --replace-all safe.directory '*' 00:00:09.834 [Pipeline] httpRequest 00:00:10.158 [Pipeline] echo 00:00:10.160 Sorcerer 10.211.164.20 is alive 00:00:10.170 [Pipeline] retry 00:00:10.173 [Pipeline] { 00:00:10.187 [Pipeline] httpRequest 00:00:10.192 HttpMethod: GET 00:00:10.193 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.194 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.217 Response Code: HTTP/1.1 200 OK 00:00:10.217 Success: Status code 200 is in the accepted range: 200,404 00:00:10.218 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.227 [Pipeline] } 00:00:13.244 [Pipeline] // retry 00:00:13.252 [Pipeline] sh 00:00:13.542 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.562 [Pipeline] httpRequest 00:00:13.945 [Pipeline] echo 00:00:13.947 Sorcerer 10.211.164.20 is alive 00:00:13.957 [Pipeline] retry 00:00:13.960 [Pipeline] { 00:00:13.976 [Pipeline] httpRequest 00:00:13.981 HttpMethod: GET 00:00:13.982 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.982 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.987 Response Code: HTTP/1.1 200 OK 00:00:13.988 Success: Status code 200 is in the accepted range: 200,404 00:00:13.988 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:32.549 [Pipeline] } 00:00:32.563 [Pipeline] // retry 00:00:32.569 [Pipeline] sh 00:00:32.858 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:36.170 [Pipeline] sh 00:00:36.456 + git -C spdk log --oneline -n5 00:00:36.457 c13c99a5e test: Various fixes for Fedora40 00:00:36.457 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:36.457 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:36.457 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:36.457 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:36.468 [Pipeline] } 00:00:36.480 [Pipeline] // stage 00:00:36.488 [Pipeline] stage 00:00:36.490 [Pipeline] { (Prepare) 00:00:36.504 [Pipeline] writeFile 00:00:36.517 [Pipeline] sh 00:00:36.801 + logger -p user.info -t JENKINS-CI 00:00:36.814 [Pipeline] sh 00:00:37.101 + logger -p user.info -t JENKINS-CI 00:00:37.114 [Pipeline] sh 00:00:37.403 + cat autorun-spdk.conf 00:00:37.403 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.403 SPDK_TEST_NVMF=1 00:00:37.403 SPDK_TEST_NVME_CLI=1 00:00:37.403 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.403 SPDK_TEST_NVMF_NICS=e810 00:00:37.403 SPDK_RUN_UBSAN=1 00:00:37.403 NET_TYPE=phy 00:00:37.412 RUN_NIGHTLY=1 00:00:37.416 [Pipeline] readFile 00:00:37.433 [Pipeline] withEnv 00:00:37.435 [Pipeline] { 00:00:37.445 [Pipeline] sh 00:00:37.733 + set -ex 00:00:37.733 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:37.733 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:37.733 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.733 ++ SPDK_TEST_NVMF=1 00:00:37.733 ++ SPDK_TEST_NVME_CLI=1 00:00:37.733 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.733 ++ SPDK_TEST_NVMF_NICS=e810 00:00:37.733 ++ 
SPDK_RUN_UBSAN=1 00:00:37.733 ++ NET_TYPE=phy 00:00:37.733 ++ RUN_NIGHTLY=1 00:00:37.733 + case $SPDK_TEST_NVMF_NICS in 00:00:37.733 + DRIVERS=ice 00:00:37.733 + [[ tcp == \r\d\m\a ]] 00:00:37.733 + [[ -n ice ]] 00:00:37.733 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:37.733 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:37.733 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:37.733 rmmod: ERROR: Module irdma is not currently loaded 00:00:37.733 rmmod: ERROR: Module i40iw is not currently loaded 00:00:37.733 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:37.733 + true 00:00:37.733 + for D in $DRIVERS 00:00:37.733 + sudo modprobe ice 00:00:37.733 + exit 0 00:00:37.744 [Pipeline] } 00:00:37.757 [Pipeline] // withEnv 00:00:37.761 [Pipeline] } 00:00:37.772 [Pipeline] // stage 00:00:37.779 [Pipeline] catchError 00:00:37.781 [Pipeline] { 00:00:37.791 [Pipeline] timeout 00:00:37.791 Timeout set to expire in 1 hr 0 min 00:00:37.792 [Pipeline] { 00:00:37.802 [Pipeline] stage 00:00:37.804 [Pipeline] { (Tests) 00:00:37.815 [Pipeline] sh 00:00:38.101 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.101 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.101 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.101 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:38.101 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.102 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.102 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:38.102 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.102 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.102 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.102 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:38.102 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.102 + source /etc/os-release 00:00:38.102 ++ NAME='Fedora Linux' 00:00:38.102 ++ VERSION='39 (Cloud Edition)' 00:00:38.102 ++ ID=fedora 00:00:38.102 ++ VERSION_ID=39 00:00:38.102 ++ VERSION_CODENAME= 00:00:38.102 ++ PLATFORM_ID=platform:f39 00:00:38.102 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:38.102 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:38.102 ++ LOGO=fedora-logo-icon 00:00:38.102 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:38.102 ++ HOME_URL=https://fedoraproject.org/ 00:00:38.102 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:38.102 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:38.102 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:38.102 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:38.102 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:38.102 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:38.102 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:38.102 ++ SUPPORT_END=2024-11-12 00:00:38.102 ++ VARIANT='Cloud Edition' 00:00:38.102 ++ VARIANT_ID=cloud 00:00:38.102 + uname -a 00:00:38.102 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:38.102 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:41.406 Hugepages 00:00:41.406 node hugesize free / total 00:00:41.406 node0 1048576kB 0 / 0 00:00:41.406 node0 2048kB 0 / 0 00:00:41.406 node1 1048576kB 0 / 0 00:00:41.406 node1 2048kB 0 / 0 00:00:41.406 00:00:41.406 Type BDF Vendor Device NUMA Driver Device Block devices 
00:00:41.406 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:41.406 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:41.406 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:41.406 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:41.406 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:41.406 + rm -f /tmp/spdk-ld-path 00:00:41.406 + source autorun-spdk.conf 00:00:41.406 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.406 ++ SPDK_TEST_NVMF=1 00:00:41.406 ++ SPDK_TEST_NVME_CLI=1 00:00:41.406 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.406 ++ SPDK_TEST_NVMF_NICS=e810 00:00:41.406 ++ SPDK_RUN_UBSAN=1 00:00:41.406 ++ NET_TYPE=phy 00:00:41.406 ++ RUN_NIGHTLY=1 00:00:41.406 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:41.406 + [[ -n '' ]] 00:00:41.406 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.406 + for M in /var/spdk/build-*-manifest.txt 00:00:41.406 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:41.406 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.406 + for M in /var/spdk/build-*-manifest.txt 00:00:41.406 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:41.406 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.406 + for M in /var/spdk/build-*-manifest.txt 00:00:41.406 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.406 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.406 ++ uname 00:00:41.406 + [[ Linux == \L\i\n\u\x ]] 00:00:41.406 + sudo dmesg -T 00:00:41.406 + sudo dmesg --clear 00:00:41.406 + dmesg_pid=1497179 00:00:41.406 + [[ Fedora Linux == FreeBSD ]] 00:00:41.406 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.406 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.406 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:41.406 + [[ -x /usr/src/fio-static/fio ]] 00:00:41.406 + export FIO_BIN=/usr/src/fio-static/fio 00:00:41.406 + FIO_BIN=/usr/src/fio-static/fio 00:00:41.406 + sudo dmesg -Tw 00:00:41.406 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:41.406 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:41.406 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:41.406 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.406 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.407 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:41.407 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.407 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.407 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.407 Test configuration: 00:00:41.407 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.407 SPDK_TEST_NVMF=1 00:00:41.407 SPDK_TEST_NVME_CLI=1 00:00:41.407 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.407 SPDK_TEST_NVMF_NICS=e810 00:00:41.407 SPDK_RUN_UBSAN=1 00:00:41.407 NET_TYPE=phy 00:00:41.407 RUN_NIGHTLY=1 05:15:44 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:00:41.407 05:15:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.407 05:15:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.407 05:15:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.407 05:15:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.407 05:15:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.407 05:15:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.407 05:15:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.407 05:15:44 -- paths/export.sh@5 -- $ export PATH 00:00:41.407 05:15:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.407 05:15:44 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.407 05:15:44 -- common/autobuild_common.sh@440 -- $ date +%s 00:00:41.407 05:15:44 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733544944.XXXXXX 00:00:41.407 05:15:44 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733544944.daiJyU 00:00:41.407 05:15:44 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 
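For reference, everything the stage above runs is controlled by the autorun-spdk.conf echoed in the "Test configuration" block, which the pipeline then hands to spdk/autorun.sh. A minimal sketch of reproducing that step by hand, assuming a local SPDK checkout in ./spdk (the variable values are copied from the configuration echoed above; the working directory is illustrative):

# write the same test configuration the job echoed above
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=1
EOF
# hand it to the autorun entry point, as the pipeline does above
./spdk/autorun.sh "$(pwd)/autorun-spdk.conf"
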
00:00:41.407 05:15:44 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:00:41.407 05:15:44 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:41.407 05:15:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.407 05:15:44 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.407 05:15:44 -- common/autobuild_common.sh@456 -- $ get_config_params 00:00:41.407 05:15:44 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:00:41.407 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.668 05:15:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:41.668 05:15:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:41.668 05:15:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:41.668 05:15:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.668 05:15:44 -- spdk/autobuild.sh@16 -- $ date -u 00:00:41.668 Sat Dec 7 04:15:44 AM UTC 2024 00:00:41.668 05:15:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:41.668 LTS-67-gc13c99a5e 00:00:41.668 05:15:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:41.668 05:15:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:41.668 05:15:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:41.668 05:15:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:41.668 05:15:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:41.668 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.668 ************************************ 00:00:41.668 START TEST ubsan 00:00:41.668 ************************************ 00:00:41.668 05:15:44 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:00:41.668 using ubsan 00:00:41.668 00:00:41.668 real 0m0.000s 00:00:41.668 user 0m0.000s 00:00:41.668 sys 0m0.000s 00:00:41.668 05:15:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:00:41.668 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.668 ************************************ 00:00:41.668 END TEST ubsan 00:00:41.668 ************************************ 00:00:41.668 05:15:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:41.668 05:15:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:41.668 05:15:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:41.668 05:15:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:41.668 05:15:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:41.668 05:15:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:41.668 05:15:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:41.668 05:15:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:41.669 05:15:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:41.669 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:41.669 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:42.240 Using 'verbs' RDMA provider 00:00:57.829 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:10.065 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:10.065 Creating mk/config.mk...done. 00:01:10.065 Creating mk/cc.flags.mk...done. 00:01:10.065 Type 'make' to build. 00:01:10.065 05:16:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:10.065 05:16:12 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:10.065 05:16:12 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:10.065 05:16:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.065 ************************************ 00:01:10.065 START TEST make 00:01:10.065 ************************************ 00:01:10.065 05:16:12 -- common/autotest_common.sh@1114 -- $ make -j144 00:01:10.065 make[1]: Nothing to be done for 'all'. 00:01:18.205 The Meson build system 00:01:18.205 Version: 1.5.0 00:01:18.205 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:18.205 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:18.205 Build type: native build 00:01:18.205 Program cat found: YES (/usr/bin/cat) 00:01:18.205 Project name: DPDK 00:01:18.205 Project version: 23.11.0 00:01:18.205 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:18.205 C linker for the host machine: cc ld.bfd 2.40-14 00:01:18.205 Host machine cpu family: x86_64 00:01:18.205 Host machine cpu: x86_64 00:01:18.205 Message: ## Building in Developer Mode ## 00:01:18.205 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:18.205 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:18.205 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:18.205 Program python3 found: YES (/usr/bin/python3) 00:01:18.205 Program cat found: YES (/usr/bin/cat) 00:01:18.205 Compiler for C supports arguments -march=native: YES 00:01:18.205 Checking for size of "void *" : 8 00:01:18.205 Checking for size of "void *" : 8 (cached) 00:01:18.205 Library m found: YES 00:01:18.205 Library numa found: YES 00:01:18.205 Has header "numaif.h" : YES 00:01:18.205 Library fdt found: NO 00:01:18.205 Library execinfo found: NO 00:01:18.205 Has header "execinfo.h" : YES 00:01:18.205 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:18.205 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:18.205 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:18.205 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:18.205 Run-time dependency openssl found: YES 3.1.1 00:01:18.205 Run-time dependency libpcap found: YES 1.10.4 00:01:18.205 Has header "pcap.h" with dependency libpcap: YES 00:01:18.205 Compiler for C supports arguments -Wcast-qual: YES 00:01:18.205 Compiler for C supports arguments -Wdeprecated: YES 00:01:18.205 Compiler for C supports arguments -Wformat: YES 00:01:18.205 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:18.205 Compiler for C supports arguments -Wformat-security: NO 00:01:18.205 Compiler for C supports arguments -Wmissing-declarations: YES 
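Stepping back from the DPDK probe output for a moment: the SPDK tree itself was configured a few entries earlier with the flag set recorded by get_config_params. Outside the CI wrapper, a roughly equivalent manual build is sketched below (flags copied verbatim from the log; the -j value here is a placeholder, the job itself ran make -j144):

cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-shared
make -j"$(nproc)"
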
00:01:18.205 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:18.205 Compiler for C supports arguments -Wnested-externs: YES 00:01:18.205 Compiler for C supports arguments -Wold-style-definition: YES 00:01:18.205 Compiler for C supports arguments -Wpointer-arith: YES 00:01:18.205 Compiler for C supports arguments -Wsign-compare: YES 00:01:18.205 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:18.205 Compiler for C supports arguments -Wundef: YES 00:01:18.205 Compiler for C supports arguments -Wwrite-strings: YES 00:01:18.205 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:18.205 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:18.205 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:18.205 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:18.205 Program objdump found: YES (/usr/bin/objdump) 00:01:18.205 Compiler for C supports arguments -mavx512f: YES 00:01:18.205 Checking if "AVX512 checking" compiles: YES 00:01:18.205 Fetching value of define "__SSE4_2__" : 1 00:01:18.205 Fetching value of define "__AES__" : 1 00:01:18.205 Fetching value of define "__AVX__" : 1 00:01:18.205 Fetching value of define "__AVX2__" : 1 00:01:18.205 Fetching value of define "__AVX512BW__" : 1 00:01:18.205 Fetching value of define "__AVX512CD__" : 1 00:01:18.205 Fetching value of define "__AVX512DQ__" : 1 00:01:18.205 Fetching value of define "__AVX512F__" : 1 00:01:18.205 Fetching value of define "__AVX512VL__" : 1 00:01:18.205 Fetching value of define "__PCLMUL__" : 1 00:01:18.205 Fetching value of define "__RDRND__" : 1 00:01:18.205 Fetching value of define "__RDSEED__" : 1 00:01:18.205 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:18.205 Fetching value of define "__znver1__" : (undefined) 00:01:18.205 Fetching value of define "__znver2__" : (undefined) 00:01:18.205 Fetching value of define "__znver3__" : (undefined) 00:01:18.205 Fetching value of define "__znver4__" : (undefined) 00:01:18.205 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:18.205 Message: lib/log: Defining dependency "log" 00:01:18.205 Message: lib/kvargs: Defining dependency "kvargs" 00:01:18.205 Message: lib/telemetry: Defining dependency "telemetry" 00:01:18.205 Checking for function "getentropy" : NO 00:01:18.205 Message: lib/eal: Defining dependency "eal" 00:01:18.205 Message: lib/ring: Defining dependency "ring" 00:01:18.205 Message: lib/rcu: Defining dependency "rcu" 00:01:18.205 Message: lib/mempool: Defining dependency "mempool" 00:01:18.205 Message: lib/mbuf: Defining dependency "mbuf" 00:01:18.205 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:18.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:18.206 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:18.206 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:18.206 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:18.206 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:18.206 Compiler for C supports arguments -mpclmul: YES 00:01:18.206 Compiler for C supports arguments -maes: YES 00:01:18.206 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:18.206 Compiler for C supports arguments -mavx512bw: YES 00:01:18.206 Compiler for C supports arguments -mavx512dq: YES 00:01:18.206 Compiler for C supports arguments -mavx512vl: YES 00:01:18.206 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:18.206 Compiler for C supports arguments -mavx2: YES 00:01:18.206 
Compiler for C supports arguments -mavx: YES 00:01:18.206 Message: lib/net: Defining dependency "net" 00:01:18.206 Message: lib/meter: Defining dependency "meter" 00:01:18.206 Message: lib/ethdev: Defining dependency "ethdev" 00:01:18.206 Message: lib/pci: Defining dependency "pci" 00:01:18.206 Message: lib/cmdline: Defining dependency "cmdline" 00:01:18.206 Message: lib/hash: Defining dependency "hash" 00:01:18.206 Message: lib/timer: Defining dependency "timer" 00:01:18.206 Message: lib/compressdev: Defining dependency "compressdev" 00:01:18.206 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:18.206 Message: lib/dmadev: Defining dependency "dmadev" 00:01:18.206 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:18.206 Message: lib/power: Defining dependency "power" 00:01:18.206 Message: lib/reorder: Defining dependency "reorder" 00:01:18.206 Message: lib/security: Defining dependency "security" 00:01:18.206 Has header "linux/userfaultfd.h" : YES 00:01:18.206 Has header "linux/vduse.h" : YES 00:01:18.206 Message: lib/vhost: Defining dependency "vhost" 00:01:18.206 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:18.206 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:18.206 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:18.206 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:18.206 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:18.206 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:18.206 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:18.206 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:18.206 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:18.206 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:18.206 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:18.206 Configuring doxy-api-html.conf using configuration 00:01:18.206 Configuring doxy-api-man.conf using configuration 00:01:18.206 Program mandb found: YES (/usr/bin/mandb) 00:01:18.206 Program sphinx-build found: NO 00:01:18.206 Configuring rte_build_config.h using configuration 00:01:18.206 Message: 00:01:18.206 ================= 00:01:18.206 Applications Enabled 00:01:18.206 ================= 00:01:18.206 00:01:18.206 apps: 00:01:18.206 00:01:18.206 00:01:18.206 Message: 00:01:18.206 ================= 00:01:18.206 Libraries Enabled 00:01:18.206 ================= 00:01:18.206 00:01:18.206 libs: 00:01:18.206 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:18.206 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:18.206 cryptodev, dmadev, power, reorder, security, vhost, 00:01:18.206 00:01:18.206 Message: 00:01:18.206 =============== 00:01:18.206 Drivers Enabled 00:01:18.206 =============== 00:01:18.206 00:01:18.206 common: 00:01:18.206 00:01:18.206 bus: 00:01:18.206 pci, vdev, 00:01:18.206 mempool: 00:01:18.206 ring, 00:01:18.206 dma: 00:01:18.206 00:01:18.206 net: 00:01:18.206 00:01:18.206 crypto: 00:01:18.206 00:01:18.206 compress: 00:01:18.206 00:01:18.206 vdpa: 00:01:18.206 00:01:18.206 00:01:18.206 Message: 00:01:18.206 ================= 00:01:18.206 Content Skipped 00:01:18.206 ================= 00:01:18.206 00:01:18.206 apps: 00:01:18.206 dumpcap: explicitly disabled via build config 00:01:18.206 graph: explicitly disabled via build config 00:01:18.206 pdump: explicitly disabled via build config 00:01:18.206 
proc-info: explicitly disabled via build config 00:01:18.206 test-acl: explicitly disabled via build config 00:01:18.206 test-bbdev: explicitly disabled via build config 00:01:18.206 test-cmdline: explicitly disabled via build config 00:01:18.206 test-compress-perf: explicitly disabled via build config 00:01:18.206 test-crypto-perf: explicitly disabled via build config 00:01:18.206 test-dma-perf: explicitly disabled via build config 00:01:18.206 test-eventdev: explicitly disabled via build config 00:01:18.206 test-fib: explicitly disabled via build config 00:01:18.206 test-flow-perf: explicitly disabled via build config 00:01:18.206 test-gpudev: explicitly disabled via build config 00:01:18.206 test-mldev: explicitly disabled via build config 00:01:18.206 test-pipeline: explicitly disabled via build config 00:01:18.206 test-pmd: explicitly disabled via build config 00:01:18.206 test-regex: explicitly disabled via build config 00:01:18.206 test-sad: explicitly disabled via build config 00:01:18.206 test-security-perf: explicitly disabled via build config 00:01:18.206 00:01:18.206 libs: 00:01:18.206 metrics: explicitly disabled via build config 00:01:18.206 acl: explicitly disabled via build config 00:01:18.206 bbdev: explicitly disabled via build config 00:01:18.206 bitratestats: explicitly disabled via build config 00:01:18.206 bpf: explicitly disabled via build config 00:01:18.206 cfgfile: explicitly disabled via build config 00:01:18.206 distributor: explicitly disabled via build config 00:01:18.206 efd: explicitly disabled via build config 00:01:18.206 eventdev: explicitly disabled via build config 00:01:18.206 dispatcher: explicitly disabled via build config 00:01:18.206 gpudev: explicitly disabled via build config 00:01:18.206 gro: explicitly disabled via build config 00:01:18.206 gso: explicitly disabled via build config 00:01:18.206 ip_frag: explicitly disabled via build config 00:01:18.206 jobstats: explicitly disabled via build config 00:01:18.206 latencystats: explicitly disabled via build config 00:01:18.206 lpm: explicitly disabled via build config 00:01:18.206 member: explicitly disabled via build config 00:01:18.206 pcapng: explicitly disabled via build config 00:01:18.206 rawdev: explicitly disabled via build config 00:01:18.206 regexdev: explicitly disabled via build config 00:01:18.206 mldev: explicitly disabled via build config 00:01:18.206 rib: explicitly disabled via build config 00:01:18.206 sched: explicitly disabled via build config 00:01:18.206 stack: explicitly disabled via build config 00:01:18.206 ipsec: explicitly disabled via build config 00:01:18.206 pdcp: explicitly disabled via build config 00:01:18.206 fib: explicitly disabled via build config 00:01:18.206 port: explicitly disabled via build config 00:01:18.206 pdump: explicitly disabled via build config 00:01:18.206 table: explicitly disabled via build config 00:01:18.206 pipeline: explicitly disabled via build config 00:01:18.206 graph: explicitly disabled via build config 00:01:18.206 node: explicitly disabled via build config 00:01:18.206 00:01:18.206 drivers: 00:01:18.206 common/cpt: not in enabled drivers build config 00:01:18.206 common/dpaax: not in enabled drivers build config 00:01:18.206 common/iavf: not in enabled drivers build config 00:01:18.206 common/idpf: not in enabled drivers build config 00:01:18.206 common/mvep: not in enabled drivers build config 00:01:18.206 common/octeontx: not in enabled drivers build config 00:01:18.206 bus/auxiliary: not in enabled drivers build config 
00:01:18.206 bus/cdx: not in enabled drivers build config 00:01:18.206 bus/dpaa: not in enabled drivers build config 00:01:18.206 bus/fslmc: not in enabled drivers build config 00:01:18.206 bus/ifpga: not in enabled drivers build config 00:01:18.206 bus/platform: not in enabled drivers build config 00:01:18.206 bus/vmbus: not in enabled drivers build config 00:01:18.206 common/cnxk: not in enabled drivers build config 00:01:18.206 common/mlx5: not in enabled drivers build config 00:01:18.206 common/nfp: not in enabled drivers build config 00:01:18.206 common/qat: not in enabled drivers build config 00:01:18.206 common/sfc_efx: not in enabled drivers build config 00:01:18.206 mempool/bucket: not in enabled drivers build config 00:01:18.206 mempool/cnxk: not in enabled drivers build config 00:01:18.206 mempool/dpaa: not in enabled drivers build config 00:01:18.206 mempool/dpaa2: not in enabled drivers build config 00:01:18.206 mempool/octeontx: not in enabled drivers build config 00:01:18.206 mempool/stack: not in enabled drivers build config 00:01:18.206 dma/cnxk: not in enabled drivers build config 00:01:18.206 dma/dpaa: not in enabled drivers build config 00:01:18.206 dma/dpaa2: not in enabled drivers build config 00:01:18.206 dma/hisilicon: not in enabled drivers build config 00:01:18.206 dma/idxd: not in enabled drivers build config 00:01:18.206 dma/ioat: not in enabled drivers build config 00:01:18.206 dma/skeleton: not in enabled drivers build config 00:01:18.206 net/af_packet: not in enabled drivers build config 00:01:18.206 net/af_xdp: not in enabled drivers build config 00:01:18.206 net/ark: not in enabled drivers build config 00:01:18.206 net/atlantic: not in enabled drivers build config 00:01:18.206 net/avp: not in enabled drivers build config 00:01:18.206 net/axgbe: not in enabled drivers build config 00:01:18.206 net/bnx2x: not in enabled drivers build config 00:01:18.206 net/bnxt: not in enabled drivers build config 00:01:18.206 net/bonding: not in enabled drivers build config 00:01:18.206 net/cnxk: not in enabled drivers build config 00:01:18.206 net/cpfl: not in enabled drivers build config 00:01:18.206 net/cxgbe: not in enabled drivers build config 00:01:18.206 net/dpaa: not in enabled drivers build config 00:01:18.206 net/dpaa2: not in enabled drivers build config 00:01:18.206 net/e1000: not in enabled drivers build config 00:01:18.206 net/ena: not in enabled drivers build config 00:01:18.206 net/enetc: not in enabled drivers build config 00:01:18.206 net/enetfec: not in enabled drivers build config 00:01:18.206 net/enic: not in enabled drivers build config 00:01:18.206 net/failsafe: not in enabled drivers build config 00:01:18.207 net/fm10k: not in enabled drivers build config 00:01:18.207 net/gve: not in enabled drivers build config 00:01:18.207 net/hinic: not in enabled drivers build config 00:01:18.207 net/hns3: not in enabled drivers build config 00:01:18.207 net/i40e: not in enabled drivers build config 00:01:18.207 net/iavf: not in enabled drivers build config 00:01:18.207 net/ice: not in enabled drivers build config 00:01:18.207 net/idpf: not in enabled drivers build config 00:01:18.207 net/igc: not in enabled drivers build config 00:01:18.207 net/ionic: not in enabled drivers build config 00:01:18.207 net/ipn3ke: not in enabled drivers build config 00:01:18.207 net/ixgbe: not in enabled drivers build config 00:01:18.207 net/mana: not in enabled drivers build config 00:01:18.207 net/memif: not in enabled drivers build config 00:01:18.207 net/mlx4: not in enabled 
drivers build config 00:01:18.207 net/mlx5: not in enabled drivers build config 00:01:18.207 net/mvneta: not in enabled drivers build config 00:01:18.207 net/mvpp2: not in enabled drivers build config 00:01:18.207 net/netvsc: not in enabled drivers build config 00:01:18.207 net/nfb: not in enabled drivers build config 00:01:18.207 net/nfp: not in enabled drivers build config 00:01:18.207 net/ngbe: not in enabled drivers build config 00:01:18.207 net/null: not in enabled drivers build config 00:01:18.207 net/octeontx: not in enabled drivers build config 00:01:18.207 net/octeon_ep: not in enabled drivers build config 00:01:18.207 net/pcap: not in enabled drivers build config 00:01:18.207 net/pfe: not in enabled drivers build config 00:01:18.207 net/qede: not in enabled drivers build config 00:01:18.207 net/ring: not in enabled drivers build config 00:01:18.207 net/sfc: not in enabled drivers build config 00:01:18.207 net/softnic: not in enabled drivers build config 00:01:18.207 net/tap: not in enabled drivers build config 00:01:18.207 net/thunderx: not in enabled drivers build config 00:01:18.207 net/txgbe: not in enabled drivers build config 00:01:18.207 net/vdev_netvsc: not in enabled drivers build config 00:01:18.207 net/vhost: not in enabled drivers build config 00:01:18.207 net/virtio: not in enabled drivers build config 00:01:18.207 net/vmxnet3: not in enabled drivers build config 00:01:18.207 raw/*: missing internal dependency, "rawdev" 00:01:18.207 crypto/armv8: not in enabled drivers build config 00:01:18.207 crypto/bcmfs: not in enabled drivers build config 00:01:18.207 crypto/caam_jr: not in enabled drivers build config 00:01:18.207 crypto/ccp: not in enabled drivers build config 00:01:18.207 crypto/cnxk: not in enabled drivers build config 00:01:18.207 crypto/dpaa_sec: not in enabled drivers build config 00:01:18.207 crypto/dpaa2_sec: not in enabled drivers build config 00:01:18.207 crypto/ipsec_mb: not in enabled drivers build config 00:01:18.207 crypto/mlx5: not in enabled drivers build config 00:01:18.207 crypto/mvsam: not in enabled drivers build config 00:01:18.207 crypto/nitrox: not in enabled drivers build config 00:01:18.207 crypto/null: not in enabled drivers build config 00:01:18.207 crypto/octeontx: not in enabled drivers build config 00:01:18.207 crypto/openssl: not in enabled drivers build config 00:01:18.207 crypto/scheduler: not in enabled drivers build config 00:01:18.207 crypto/uadk: not in enabled drivers build config 00:01:18.207 crypto/virtio: not in enabled drivers build config 00:01:18.207 compress/isal: not in enabled drivers build config 00:01:18.207 compress/mlx5: not in enabled drivers build config 00:01:18.207 compress/octeontx: not in enabled drivers build config 00:01:18.207 compress/zlib: not in enabled drivers build config 00:01:18.207 regex/*: missing internal dependency, "regexdev" 00:01:18.207 ml/*: missing internal dependency, "mldev" 00:01:18.207 vdpa/ifc: not in enabled drivers build config 00:01:18.207 vdpa/mlx5: not in enabled drivers build config 00:01:18.207 vdpa/nfp: not in enabled drivers build config 00:01:18.207 vdpa/sfc: not in enabled drivers build config 00:01:18.207 event/*: missing internal dependency, "eventdev" 00:01:18.207 baseband/*: missing internal dependency, "bbdev" 00:01:18.207 gpu/*: missing internal dependency, "gpudev" 00:01:18.207 00:01:18.207 00:01:18.468 Build targets in project: 84 00:01:18.468 00:01:18.468 DPDK 23.11.0 00:01:18.468 00:01:18.468 User defined options 00:01:18.468 buildtype : debug 00:01:18.468 
default_library : shared 00:01:18.468 libdir : lib 00:01:18.468 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:18.468 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:18.468 c_link_args : 00:01:18.468 cpu_instruction_set: native 00:01:18.468 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:18.468 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:18.468 enable_docs : false 00:01:18.468 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:18.468 enable_kmods : false 00:01:18.468 tests : false 00:01:18.468 00:01:18.468 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:19.047 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:19.047 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:19.047 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:19.047 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:19.047 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:19.047 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.047 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:19.047 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:19.047 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.047 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:19.047 [10/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.047 [11/264] Linking static target lib/librte_kvargs.a 00:01:19.047 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:19.047 [13/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:19.047 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.047 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:19.047 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:19.047 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:19.047 [18/264] Linking static target lib/librte_log.a 00:01:19.309 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:19.309 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:19.309 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.309 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.309 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:19.309 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:19.309 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:19.309 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.309 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.309 
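The "User defined options" summary above is how the SPDK make step parameterizes DPDK's meson configuration. Purely as an illustration of those same options on a standalone DPDK tree (the exact command line is an assumption, since SPDK's build normally issues it for you, and the long disable_apps/disable_libs lists are omitted here for brevity):

# illustrative only -- option names taken from the summary above
meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Dtests=false -Denable_kmods=false \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds'
ninja -C build-tmp
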
[28/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.309 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.309 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.309 [31/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:19.309 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.309 [33/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.309 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.309 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.309 [36/264] Linking static target lib/librte_pci.a 00:01:19.309 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.309 [38/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.309 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.309 [40/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:19.309 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.309 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:19.309 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:19.569 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.570 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.570 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.570 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.570 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.570 [49/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.570 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.570 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.570 [52/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.570 [53/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.570 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.570 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.570 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.570 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.570 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.570 [59/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:19.570 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.570 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.570 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.570 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.570 [64/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.570 [65/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:19.570 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.570 [67/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.570 [68/264] Linking static target lib/librte_telemetry.a 00:01:19.570 [69/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.570 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:19.570 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:19.570 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.570 [73/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:19.570 [74/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:19.570 [75/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:19.570 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:19.570 [77/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:19.570 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.570 [79/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:19.570 [80/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:19.570 [81/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:19.570 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:19.570 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.570 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.570 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.570 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.570 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.570 [88/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.570 [89/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:19.570 [90/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:19.570 [91/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.570 [92/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:19.570 [93/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.570 [94/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:19.570 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.570 [96/264] Linking static target lib/librte_cmdline.a 00:01:19.570 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:19.570 [98/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:19.570 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.830 [100/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.830 [101/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:19.830 [102/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:19.830 [103/264] Linking static target lib/librte_meter.a 00:01:19.830 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.830 [105/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:19.830 [106/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:19.830 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.830 
[108/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.830 [109/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:19.830 [110/264] Linking static target lib/librte_ring.a 00:01:19.830 [111/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:19.830 [112/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:19.830 [113/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:19.830 [114/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.830 [115/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.830 [116/264] Linking static target lib/librte_mempool.a 00:01:19.830 [117/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:19.830 [118/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:19.830 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.830 [120/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.830 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:19.830 [122/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:19.830 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:19.830 [124/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:19.830 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:19.830 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:19.830 [127/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.830 [128/264] Linking static target lib/librte_timer.a 00:01:19.830 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:19.830 [130/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:19.830 [131/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:19.830 [132/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:19.830 [133/264] Linking static target lib/librte_compressdev.a 00:01:19.830 [134/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:19.830 [135/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:19.830 [136/264] Linking static target lib/librte_net.a 00:01:19.830 [137/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:19.830 [138/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.830 [139/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:19.830 [140/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:19.830 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:19.830 [142/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:19.830 [143/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:19.830 [144/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:19.830 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:19.830 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:19.830 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:19.830 [148/264] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:19.830 [149/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:19.830 [150/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:19.830 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:19.830 [152/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:19.830 [153/264] Linking static target lib/librte_power.a 00:01:19.830 [154/264] Linking target lib/librte_log.so.24.0 00:01:19.830 [155/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:19.830 [156/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:19.830 [157/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:19.830 [158/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:19.830 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:19.830 [160/264] Linking static target lib/librte_rcu.a 00:01:19.830 [161/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:19.830 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:19.830 [163/264] Linking static target lib/librte_security.a 00:01:19.830 [164/264] Linking static target lib/librte_dmadev.a 00:01:19.830 [165/264] Linking static target lib/librte_reorder.a 00:01:19.830 [166/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:19.830 [167/264] Linking static target lib/librte_eal.a 00:01:19.830 [168/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:19.830 [169/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:19.831 [170/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:19.831 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:19.831 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:19.831 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:19.831 [174/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:20.092 [175/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:20.092 [176/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:20.092 [177/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:20.092 [178/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:20.092 [179/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:20.092 [180/264] Linking static target lib/librte_mbuf.a 00:01:20.092 [181/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.092 [182/264] Linking target lib/librte_kvargs.so.24.0 00:01:20.092 [183/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.092 [184/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.092 [185/264] Linking static target drivers/librte_bus_vdev.a 00:01:20.092 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.092 [187/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:20.092 [188/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.092 [189/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.092 [190/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:20.092 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:20.092 [192/264] Linking static target lib/librte_hash.a 00:01:20.092 [193/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.092 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.092 [195/264] Linking static target drivers/librte_bus_pci.a 00:01:20.092 [196/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.092 [197/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:20.092 [198/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.092 [199/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.092 [200/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.092 [201/264] Linking static target drivers/librte_mempool_ring.a 00:01:20.352 [202/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.352 [203/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:20.352 [204/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.352 [205/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.352 [206/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:20.352 [207/264] Linking target lib/librte_telemetry.so.24.0 00:01:20.352 [208/264] Linking static target lib/librte_cryptodev.a 00:01:20.352 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.352 [210/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.352 [211/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:20.352 [212/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.613 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.613 [214/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:20.613 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.613 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:20.613 [217/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.613 [218/264] Linking static target lib/librte_ethdev.a 00:01:20.874 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.874 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.874 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.874 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.136 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.709 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:21.709 [225/264] Linking static target lib/librte_vhost.a 00:01:22.651 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:24.036 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.622 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.003 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.003 [230/264] Linking target lib/librte_eal.so.24.0 00:01:32.003 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:32.003 [232/264] Linking target lib/librte_meter.so.24.0 00:01:32.003 [233/264] Linking target lib/librte_ring.so.24.0 00:01:32.003 [234/264] Linking target lib/librte_pci.so.24.0 00:01:32.003 [235/264] Linking target lib/librte_timer.so.24.0 00:01:32.003 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:32.003 [237/264] Linking target lib/librte_dmadev.so.24.0 00:01:32.262 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:32.262 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:32.262 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:32.262 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:32.262 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:32.262 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:32.262 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:32.262 [245/264] Linking target lib/librte_rcu.so.24.0 00:01:32.521 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:32.521 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:32.521 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:32.521 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:32.780 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:32.780 [251/264] Linking target lib/librte_net.so.24.0 00:01:32.780 [252/264] Linking target lib/librte_compressdev.so.24.0 00:01:32.780 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:32.780 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:32.780 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:32.780 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:33.041 [257/264] Linking target lib/librte_hash.so.24.0 00:01:33.041 [258/264] Linking target lib/librte_cmdline.so.24.0 00:01:33.041 [259/264] Linking target lib/librte_ethdev.so.24.0 00:01:33.041 [260/264] Linking target lib/librte_security.so.24.0 00:01:33.041 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:33.041 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:33.041 [263/264] Linking target lib/librte_power.so.24.0 00:01:33.041 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:33.041 INFO: autodetecting backend as ninja 00:01:33.041 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:34.425 CC lib/log/log.o 00:01:34.425 CC lib/ut_mock/mock.o 00:01:34.425 CC lib/log/log_flags.o 00:01:34.425 CC lib/log/log_deprecated.o 00:01:34.425 CC lib/ut/ut.o 00:01:34.425 LIB libspdk_ut_mock.a 00:01:34.425 LIB libspdk_log.a 00:01:34.425 LIB 
libspdk_ut.a 00:01:34.425 SO libspdk_ut_mock.so.5.0 00:01:34.425 SO libspdk_log.so.6.1 00:01:34.425 SO libspdk_ut.so.1.0 00:01:34.425 SYMLINK libspdk_ut_mock.so 00:01:34.425 SYMLINK libspdk_log.so 00:01:34.425 SYMLINK libspdk_ut.so 00:01:34.686 CC lib/util/base64.o 00:01:34.686 CC lib/util/bit_array.o 00:01:34.686 CC lib/util/cpuset.o 00:01:34.686 CC lib/util/crc16.o 00:01:34.686 CC lib/util/crc32.o 00:01:34.686 CC lib/util/crc32c.o 00:01:34.686 CXX lib/trace_parser/trace.o 00:01:34.686 CC lib/util/crc32_ieee.o 00:01:34.686 CC lib/dma/dma.o 00:01:34.686 CC lib/ioat/ioat.o 00:01:34.686 CC lib/util/crc64.o 00:01:34.686 CC lib/util/dif.o 00:01:34.686 CC lib/util/fd.o 00:01:34.686 CC lib/util/file.o 00:01:34.686 CC lib/util/hexlify.o 00:01:34.686 CC lib/util/iov.o 00:01:34.686 CC lib/util/math.o 00:01:34.686 CC lib/util/pipe.o 00:01:34.686 CC lib/util/strerror_tls.o 00:01:34.686 CC lib/util/string.o 00:01:34.686 CC lib/util/uuid.o 00:01:34.686 CC lib/util/fd_group.o 00:01:34.686 CC lib/util/xor.o 00:01:34.686 CC lib/util/zipf.o 00:01:34.946 CC lib/vfio_user/host/vfio_user_pci.o 00:01:34.946 CC lib/vfio_user/host/vfio_user.o 00:01:34.946 LIB libspdk_dma.a 00:01:34.946 SO libspdk_dma.so.3.0 00:01:34.946 LIB libspdk_ioat.a 00:01:34.946 SYMLINK libspdk_dma.so 00:01:34.946 SO libspdk_ioat.so.6.0 00:01:35.207 LIB libspdk_vfio_user.a 00:01:35.207 SYMLINK libspdk_ioat.so 00:01:35.207 SO libspdk_vfio_user.so.4.0 00:01:35.207 SYMLINK libspdk_vfio_user.so 00:01:35.207 LIB libspdk_util.a 00:01:35.207 SO libspdk_util.so.8.0 00:01:35.468 SYMLINK libspdk_util.so 00:01:35.468 LIB libspdk_trace_parser.a 00:01:35.468 SO libspdk_trace_parser.so.4.0 00:01:35.729 CC lib/conf/conf.o 00:01:35.729 CC lib/json/json_parse.o 00:01:35.729 CC lib/json/json_util.o 00:01:35.729 CC lib/json/json_write.o 00:01:35.729 CC lib/vmd/vmd.o 00:01:35.729 CC lib/vmd/led.o 00:01:35.729 CC lib/env_dpdk/env.o 00:01:35.729 CC lib/env_dpdk/memory.o 00:01:35.729 CC lib/rdma/common.o 00:01:35.729 CC lib/env_dpdk/pci.o 00:01:35.729 CC lib/rdma/rdma_verbs.o 00:01:35.729 CC lib/env_dpdk/init.o 00:01:35.729 CC lib/idxd/idxd.o 00:01:35.729 CC lib/env_dpdk/threads.o 00:01:35.729 CC lib/idxd/idxd_user.o 00:01:35.729 CC lib/env_dpdk/pci_ioat.o 00:01:35.729 CC lib/idxd/idxd_kernel.o 00:01:35.729 CC lib/env_dpdk/pci_virtio.o 00:01:35.729 CC lib/env_dpdk/pci_vmd.o 00:01:35.729 CC lib/env_dpdk/pci_idxd.o 00:01:35.729 CC lib/env_dpdk/pci_event.o 00:01:35.729 CC lib/env_dpdk/sigbus_handler.o 00:01:35.729 CC lib/env_dpdk/pci_dpdk.o 00:01:35.729 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:35.729 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:35.729 SYMLINK libspdk_trace_parser.so 00:01:35.992 LIB libspdk_conf.a 00:01:35.992 SO libspdk_conf.so.5.0 00:01:35.992 LIB libspdk_json.a 00:01:35.992 LIB libspdk_rdma.a 00:01:35.992 SYMLINK libspdk_conf.so 00:01:35.992 SO libspdk_json.so.5.1 00:01:35.992 SO libspdk_rdma.so.5.0 00:01:35.992 SYMLINK libspdk_json.so 00:01:35.992 SYMLINK libspdk_rdma.so 00:01:36.255 LIB libspdk_idxd.a 00:01:36.255 SO libspdk_idxd.so.11.0 00:01:36.255 LIB libspdk_vmd.a 00:01:36.255 SO libspdk_vmd.so.5.0 00:01:36.255 SYMLINK libspdk_idxd.so 00:01:36.255 CC lib/jsonrpc/jsonrpc_server.o 00:01:36.255 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:36.255 CC lib/jsonrpc/jsonrpc_client.o 00:01:36.255 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:36.255 SYMLINK libspdk_vmd.so 00:01:36.516 LIB libspdk_jsonrpc.a 00:01:36.516 SO libspdk_jsonrpc.so.5.1 00:01:36.777 SYMLINK libspdk_jsonrpc.so 00:01:36.777 CC lib/rpc/rpc.o 00:01:37.039 LIB libspdk_env_dpdk.a 00:01:37.039 SO 
libspdk_env_dpdk.so.13.0 00:01:37.039 LIB libspdk_rpc.a 00:01:37.039 SYMLINK libspdk_env_dpdk.so 00:01:37.039 SO libspdk_rpc.so.5.0 00:01:37.300 SYMLINK libspdk_rpc.so 00:01:37.561 CC lib/trace/trace.o 00:01:37.561 CC lib/notify/notify.o 00:01:37.561 CC lib/trace/trace_flags.o 00:01:37.561 CC lib/notify/notify_rpc.o 00:01:37.561 CC lib/trace/trace_rpc.o 00:01:37.561 CC lib/sock/sock.o 00:01:37.561 CC lib/sock/sock_rpc.o 00:01:37.561 LIB libspdk_notify.a 00:01:37.561 SO libspdk_notify.so.5.0 00:01:37.561 LIB libspdk_trace.a 00:01:37.822 SO libspdk_trace.so.9.0 00:01:37.822 SYMLINK libspdk_notify.so 00:01:37.822 SYMLINK libspdk_trace.so 00:01:37.822 LIB libspdk_sock.a 00:01:37.822 SO libspdk_sock.so.8.0 00:01:37.822 SYMLINK libspdk_sock.so 00:01:38.084 CC lib/thread/thread.o 00:01:38.084 CC lib/thread/iobuf.o 00:01:38.084 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:38.084 CC lib/nvme/nvme_ctrlr.o 00:01:38.084 CC lib/nvme/nvme_fabric.o 00:01:38.084 CC lib/nvme/nvme_ns_cmd.o 00:01:38.084 CC lib/nvme/nvme_ns.o 00:01:38.084 CC lib/nvme/nvme_pcie_common.o 00:01:38.084 CC lib/nvme/nvme_pcie.o 00:01:38.084 CC lib/nvme/nvme_qpair.o 00:01:38.084 CC lib/nvme/nvme.o 00:01:38.084 CC lib/nvme/nvme_quirks.o 00:01:38.084 CC lib/nvme/nvme_transport.o 00:01:38.084 CC lib/nvme/nvme_discovery.o 00:01:38.084 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:38.084 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:38.084 CC lib/nvme/nvme_tcp.o 00:01:38.084 CC lib/nvme/nvme_opal.o 00:01:38.084 CC lib/nvme/nvme_io_msg.o 00:01:38.084 CC lib/nvme/nvme_poll_group.o 00:01:38.084 CC lib/nvme/nvme_zns.o 00:01:38.084 CC lib/nvme/nvme_cuse.o 00:01:38.084 CC lib/nvme/nvme_vfio_user.o 00:01:38.084 CC lib/nvme/nvme_rdma.o 00:01:39.473 LIB libspdk_thread.a 00:01:39.474 SO libspdk_thread.so.9.0 00:01:39.474 SYMLINK libspdk_thread.so 00:01:39.734 CC lib/blob/blobstore.o 00:01:39.734 CC lib/blob/request.o 00:01:39.734 CC lib/blob/zeroes.o 00:01:39.734 CC lib/blob/blob_bs_dev.o 00:01:39.734 CC lib/init/json_config.o 00:01:39.734 CC lib/init/subsystem.o 00:01:39.734 CC lib/init/subsystem_rpc.o 00:01:39.734 CC lib/accel/accel.o 00:01:39.734 CC lib/init/rpc.o 00:01:39.734 CC lib/virtio/virtio.o 00:01:39.734 CC lib/accel/accel_rpc.o 00:01:39.734 CC lib/virtio/virtio_vhost_user.o 00:01:39.734 CC lib/accel/accel_sw.o 00:01:39.734 CC lib/virtio/virtio_vfio_user.o 00:01:39.734 CC lib/virtio/virtio_pci.o 00:01:39.996 LIB libspdk_init.a 00:01:39.996 SO libspdk_init.so.4.0 00:01:39.996 LIB libspdk_nvme.a 00:01:39.996 LIB libspdk_virtio.a 00:01:39.996 SYMLINK libspdk_init.so 00:01:39.996 SO libspdk_virtio.so.6.0 00:01:40.258 SO libspdk_nvme.so.12.0 00:01:40.258 SYMLINK libspdk_virtio.so 00:01:40.258 CC lib/event/app.o 00:01:40.258 CC lib/event/reactor.o 00:01:40.258 CC lib/event/log_rpc.o 00:01:40.258 CC lib/event/app_rpc.o 00:01:40.258 CC lib/event/scheduler_static.o 00:01:40.519 SYMLINK libspdk_nvme.so 00:01:40.779 LIB libspdk_accel.a 00:01:40.780 SO libspdk_accel.so.14.0 00:01:40.780 LIB libspdk_event.a 00:01:40.780 SO libspdk_event.so.12.0 00:01:40.780 SYMLINK libspdk_accel.so 00:01:40.780 SYMLINK libspdk_event.so 00:01:41.042 CC lib/bdev/bdev.o 00:01:41.042 CC lib/bdev/bdev_rpc.o 00:01:41.042 CC lib/bdev/bdev_zone.o 00:01:41.042 CC lib/bdev/part.o 00:01:41.042 CC lib/bdev/scsi_nvme.o 00:01:42.429 LIB libspdk_blob.a 00:01:42.429 SO libspdk_blob.so.10.1 00:01:42.429 SYMLINK libspdk_blob.so 00:01:42.429 CC lib/blobfs/blobfs.o 00:01:42.429 CC lib/blobfs/tree.o 00:01:42.429 CC lib/lvol/lvol.o 00:01:43.373 LIB libspdk_bdev.a 00:01:43.373 LIB libspdk_blobfs.a 00:01:43.373 
SO libspdk_blobfs.so.9.0 00:01:43.373 SO libspdk_bdev.so.14.0 00:01:43.373 LIB libspdk_lvol.a 00:01:43.373 SYMLINK libspdk_blobfs.so 00:01:43.373 SO libspdk_lvol.so.9.1 00:01:43.373 SYMLINK libspdk_bdev.so 00:01:43.373 SYMLINK libspdk_lvol.so 00:01:43.634 CC lib/nvmf/ctrlr.o 00:01:43.634 CC lib/scsi/dev.o 00:01:43.634 CC lib/nvmf/ctrlr_discovery.o 00:01:43.634 CC lib/scsi/lun.o 00:01:43.634 CC lib/nbd/nbd.o 00:01:43.634 CC lib/scsi/port.o 00:01:43.634 CC lib/nvmf/ctrlr_bdev.o 00:01:43.634 CC lib/nvmf/subsystem.o 00:01:43.634 CC lib/scsi/scsi.o 00:01:43.635 CC lib/nbd/nbd_rpc.o 00:01:43.635 CC lib/nvmf/nvmf.o 00:01:43.635 CC lib/scsi/scsi_bdev.o 00:01:43.635 CC lib/scsi/scsi_pr.o 00:01:43.635 CC lib/ftl/ftl_core.o 00:01:43.635 CC lib/nvmf/nvmf_rpc.o 00:01:43.635 CC lib/scsi/scsi_rpc.o 00:01:43.635 CC lib/ftl/ftl_init.o 00:01:43.635 CC lib/nvmf/transport.o 00:01:43.635 CC lib/scsi/task.o 00:01:43.635 CC lib/ublk/ublk.o 00:01:43.635 CC lib/nvmf/tcp.o 00:01:43.635 CC lib/ftl/ftl_layout.o 00:01:43.635 CC lib/nvmf/rdma.o 00:01:43.635 CC lib/ublk/ublk_rpc.o 00:01:43.635 CC lib/ftl/ftl_debug.o 00:01:43.635 CC lib/ftl/ftl_io.o 00:01:43.635 CC lib/ftl/ftl_sb.o 00:01:43.635 CC lib/ftl/ftl_l2p.o 00:01:43.635 CC lib/ftl/ftl_l2p_flat.o 00:01:43.635 CC lib/ftl/ftl_nv_cache.o 00:01:43.635 CC lib/ftl/ftl_band.o 00:01:43.635 CC lib/ftl/ftl_band_ops.o 00:01:43.635 CC lib/ftl/ftl_writer.o 00:01:43.635 CC lib/ftl/ftl_rq.o 00:01:43.635 CC lib/ftl/ftl_reloc.o 00:01:43.635 CC lib/ftl/ftl_l2p_cache.o 00:01:43.635 CC lib/ftl/ftl_p2l.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:43.635 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:43.635 CC lib/ftl/utils/ftl_conf.o 00:01:43.635 CC lib/ftl/utils/ftl_md.o 00:01:43.635 CC lib/ftl/utils/ftl_mempool.o 00:01:43.635 CC lib/ftl/utils/ftl_bitmap.o 00:01:43.635 CC lib/ftl/utils/ftl_property.o 00:01:43.635 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:43.635 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:43.635 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:43.635 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:43.635 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:43.635 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:43.635 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:43.635 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:43.635 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:43.635 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:43.635 CC lib/ftl/base/ftl_base_dev.o 00:01:43.635 CC lib/ftl/base/ftl_base_bdev.o 00:01:43.635 CC lib/ftl/ftl_trace.o 00:01:44.207 LIB libspdk_nbd.a 00:01:44.207 SO libspdk_nbd.so.6.0 00:01:44.207 SYMLINK libspdk_nbd.so 00:01:44.207 LIB libspdk_scsi.a 00:01:44.207 SO libspdk_scsi.so.8.0 00:01:44.207 LIB libspdk_ublk.a 00:01:44.207 SYMLINK libspdk_scsi.so 00:01:44.469 SO libspdk_ublk.so.2.0 00:01:44.469 SYMLINK libspdk_ublk.so 00:01:44.469 CC lib/iscsi/conn.o 00:01:44.469 CC lib/vhost/vhost.o 00:01:44.469 CC lib/iscsi/init_grp.o 00:01:44.469 CC lib/vhost/vhost_rpc.o 00:01:44.469 CC lib/iscsi/iscsi.o 00:01:44.469 CC lib/vhost/vhost_scsi.o 00:01:44.469 CC lib/iscsi/md5.o 00:01:44.469 CC lib/vhost/vhost_blk.o 
00:01:44.469 CC lib/iscsi/param.o 00:01:44.469 CC lib/vhost/rte_vhost_user.o 00:01:44.469 CC lib/iscsi/portal_grp.o 00:01:44.469 CC lib/iscsi/tgt_node.o 00:01:44.469 CC lib/iscsi/iscsi_subsystem.o 00:01:44.469 CC lib/iscsi/iscsi_rpc.o 00:01:44.469 CC lib/iscsi/task.o 00:01:44.469 LIB libspdk_ftl.a 00:01:44.730 SO libspdk_ftl.so.8.0 00:01:44.992 SYMLINK libspdk_ftl.so 00:01:45.565 LIB libspdk_nvmf.a 00:01:45.565 LIB libspdk_vhost.a 00:01:45.565 SO libspdk_nvmf.so.17.0 00:01:45.565 SO libspdk_vhost.so.7.1 00:01:45.565 SYMLINK libspdk_vhost.so 00:01:45.827 SYMLINK libspdk_nvmf.so 00:01:45.827 LIB libspdk_iscsi.a 00:01:45.827 SO libspdk_iscsi.so.7.0 00:01:45.827 SYMLINK libspdk_iscsi.so 00:01:46.401 CC module/env_dpdk/env_dpdk_rpc.o 00:01:46.401 CC module/accel/error/accel_error.o 00:01:46.401 CC module/blob/bdev/blob_bdev.o 00:01:46.401 CC module/accel/error/accel_error_rpc.o 00:01:46.401 CC module/sock/posix/posix.o 00:01:46.401 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:46.401 CC module/accel/ioat/accel_ioat.o 00:01:46.401 CC module/accel/ioat/accel_ioat_rpc.o 00:01:46.401 CC module/scheduler/gscheduler/gscheduler.o 00:01:46.401 CC module/accel/dsa/accel_dsa.o 00:01:46.401 CC module/accel/dsa/accel_dsa_rpc.o 00:01:46.401 CC module/accel/iaa/accel_iaa.o 00:01:46.401 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:46.401 CC module/accel/iaa/accel_iaa_rpc.o 00:01:46.401 LIB libspdk_env_dpdk_rpc.a 00:01:46.401 SO libspdk_env_dpdk_rpc.so.5.0 00:01:46.663 SYMLINK libspdk_env_dpdk_rpc.so 00:01:46.663 LIB libspdk_scheduler_dpdk_governor.a 00:01:46.663 LIB libspdk_scheduler_gscheduler.a 00:01:46.663 LIB libspdk_accel_error.a 00:01:46.663 LIB libspdk_accel_ioat.a 00:01:46.663 LIB libspdk_scheduler_dynamic.a 00:01:46.663 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:46.663 LIB libspdk_accel_iaa.a 00:01:46.663 SO libspdk_scheduler_gscheduler.so.3.0 00:01:46.663 SO libspdk_accel_error.so.1.0 00:01:46.663 SO libspdk_accel_ioat.so.5.0 00:01:46.663 SO libspdk_scheduler_dynamic.so.3.0 00:01:46.663 LIB libspdk_blob_bdev.a 00:01:46.663 LIB libspdk_accel_dsa.a 00:01:46.663 SO libspdk_accel_iaa.so.2.0 00:01:46.663 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:46.663 SO libspdk_blob_bdev.so.10.1 00:01:46.663 SYMLINK libspdk_scheduler_gscheduler.so 00:01:46.663 SYMLINK libspdk_accel_ioat.so 00:01:46.663 SO libspdk_accel_dsa.so.4.0 00:01:46.663 SYMLINK libspdk_accel_error.so 00:01:46.663 SYMLINK libspdk_scheduler_dynamic.so 00:01:46.663 SYMLINK libspdk_accel_iaa.so 00:01:46.924 SYMLINK libspdk_blob_bdev.so 00:01:46.924 SYMLINK libspdk_accel_dsa.so 00:01:47.185 LIB libspdk_sock_posix.a 00:01:47.185 SO libspdk_sock_posix.so.5.0 00:01:47.185 CC module/bdev/lvol/vbdev_lvol.o 00:01:47.185 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:47.185 CC module/bdev/passthru/vbdev_passthru.o 00:01:47.185 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:47.185 CC module/bdev/gpt/gpt.o 00:01:47.185 CC module/bdev/nvme/bdev_nvme.o 00:01:47.185 CC module/bdev/gpt/vbdev_gpt.o 00:01:47.185 CC module/bdev/error/vbdev_error.o 00:01:47.185 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:47.185 CC module/bdev/delay/vbdev_delay.o 00:01:47.185 CC module/bdev/error/vbdev_error_rpc.o 00:01:47.185 CC module/bdev/null/bdev_null.o 00:01:47.185 CC module/bdev/nvme/nvme_rpc.o 00:01:47.185 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:47.185 CC module/bdev/malloc/bdev_malloc.o 00:01:47.185 CC module/bdev/null/bdev_null_rpc.o 00:01:47.185 CC module/bdev/nvme/bdev_mdns_client.o 00:01:47.185 CC module/blobfs/bdev/blobfs_bdev.o 
00:01:47.185 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:47.185 CC module/bdev/nvme/vbdev_opal.o 00:01:47.185 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:47.185 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:47.185 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:47.185 CC module/bdev/iscsi/bdev_iscsi.o 00:01:47.185 CC module/bdev/split/vbdev_split.o 00:01:47.185 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:47.185 CC module/bdev/raid/bdev_raid_rpc.o 00:01:47.185 CC module/bdev/raid/bdev_raid.o 00:01:47.185 CC module/bdev/split/vbdev_split_rpc.o 00:01:47.185 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:47.185 CC module/bdev/raid/bdev_raid_sb.o 00:01:47.185 CC module/bdev/ftl/bdev_ftl.o 00:01:47.185 CC module/bdev/raid/raid0.o 00:01:47.185 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:47.185 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:47.185 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:47.185 CC module/bdev/raid/raid1.o 00:01:47.185 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:47.185 CC module/bdev/raid/concat.o 00:01:47.185 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:47.185 CC module/bdev/aio/bdev_aio.o 00:01:47.185 CC module/bdev/aio/bdev_aio_rpc.o 00:01:47.185 SYMLINK libspdk_sock_posix.so 00:01:47.447 LIB libspdk_blobfs_bdev.a 00:01:47.447 SO libspdk_blobfs_bdev.so.5.0 00:01:47.447 LIB libspdk_bdev_split.a 00:01:47.447 LIB libspdk_bdev_null.a 00:01:47.447 LIB libspdk_bdev_error.a 00:01:47.447 LIB libspdk_bdev_gpt.a 00:01:47.447 SO libspdk_bdev_split.so.5.0 00:01:47.447 SYMLINK libspdk_blobfs_bdev.so 00:01:47.447 LIB libspdk_bdev_passthru.a 00:01:47.447 SO libspdk_bdev_gpt.so.5.0 00:01:47.447 SO libspdk_bdev_null.so.5.0 00:01:47.447 SO libspdk_bdev_error.so.5.0 00:01:47.447 LIB libspdk_bdev_ftl.a 00:01:47.709 SO libspdk_bdev_passthru.so.5.0 00:01:47.709 LIB libspdk_bdev_aio.a 00:01:47.709 LIB libspdk_bdev_delay.a 00:01:47.709 LIB libspdk_bdev_zone_block.a 00:01:47.709 SO libspdk_bdev_ftl.so.5.0 00:01:47.709 SYMLINK libspdk_bdev_split.so 00:01:47.709 SYMLINK libspdk_bdev_null.so 00:01:47.709 SYMLINK libspdk_bdev_gpt.so 00:01:47.709 SYMLINK libspdk_bdev_error.so 00:01:47.709 LIB libspdk_bdev_malloc.a 00:01:47.709 LIB libspdk_bdev_iscsi.a 00:01:47.709 SO libspdk_bdev_aio.so.5.0 00:01:47.709 SO libspdk_bdev_zone_block.so.5.0 00:01:47.709 SO libspdk_bdev_delay.so.5.0 00:01:47.709 SYMLINK libspdk_bdev_passthru.so 00:01:47.709 SO libspdk_bdev_malloc.so.5.0 00:01:47.709 SO libspdk_bdev_iscsi.so.5.0 00:01:47.709 SYMLINK libspdk_bdev_ftl.so 00:01:47.709 SYMLINK libspdk_bdev_aio.so 00:01:47.709 SYMLINK libspdk_bdev_zone_block.so 00:01:47.709 SYMLINK libspdk_bdev_delay.so 00:01:47.709 LIB libspdk_bdev_lvol.a 00:01:47.709 SYMLINK libspdk_bdev_malloc.so 00:01:47.709 LIB libspdk_bdev_virtio.a 00:01:47.709 SYMLINK libspdk_bdev_iscsi.so 00:01:47.709 SO libspdk_bdev_lvol.so.5.0 00:01:47.709 SO libspdk_bdev_virtio.so.5.0 00:01:47.971 SYMLINK libspdk_bdev_lvol.so 00:01:47.971 SYMLINK libspdk_bdev_virtio.so 00:01:47.971 LIB libspdk_bdev_raid.a 00:01:48.233 SO libspdk_bdev_raid.so.5.0 00:01:48.233 SYMLINK libspdk_bdev_raid.so 00:01:49.172 LIB libspdk_bdev_nvme.a 00:01:49.172 SO libspdk_bdev_nvme.so.6.0 00:01:49.432 SYMLINK libspdk_bdev_nvme.so 00:01:49.691 CC module/event/subsystems/iobuf/iobuf.o 00:01:49.691 CC module/event/subsystems/vmd/vmd.o 00:01:49.691 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:49.691 CC module/event/subsystems/scheduler/scheduler.o 00:01:49.691 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:49.691 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:49.691 CC 
module/event/subsystems/sock/sock.o 00:01:49.951 LIB libspdk_event_sock.a 00:01:49.951 LIB libspdk_event_vmd.a 00:01:49.951 LIB libspdk_event_scheduler.a 00:01:49.951 LIB libspdk_event_vhost_blk.a 00:01:49.951 LIB libspdk_event_iobuf.a 00:01:49.951 SO libspdk_event_scheduler.so.3.0 00:01:49.951 SO libspdk_event_sock.so.4.0 00:01:49.951 SO libspdk_event_vmd.so.5.0 00:01:49.951 SO libspdk_event_vhost_blk.so.2.0 00:01:49.951 SO libspdk_event_iobuf.so.2.0 00:01:49.951 SYMLINK libspdk_event_sock.so 00:01:49.951 SYMLINK libspdk_event_scheduler.so 00:01:49.951 SYMLINK libspdk_event_vhost_blk.so 00:01:49.951 SYMLINK libspdk_event_vmd.so 00:01:50.210 SYMLINK libspdk_event_iobuf.so 00:01:50.210 CC module/event/subsystems/accel/accel.o 00:01:50.470 LIB libspdk_event_accel.a 00:01:50.470 SO libspdk_event_accel.so.5.0 00:01:50.470 SYMLINK libspdk_event_accel.so 00:01:50.730 CC module/event/subsystems/bdev/bdev.o 00:01:50.991 LIB libspdk_event_bdev.a 00:01:50.991 SO libspdk_event_bdev.so.5.0 00:01:50.991 SYMLINK libspdk_event_bdev.so 00:01:51.252 CC module/event/subsystems/ublk/ublk.o 00:01:51.252 CC module/event/subsystems/scsi/scsi.o 00:01:51.252 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:51.252 CC module/event/subsystems/nbd/nbd.o 00:01:51.252 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:51.512 LIB libspdk_event_ublk.a 00:01:51.512 LIB libspdk_event_nbd.a 00:01:51.512 SO libspdk_event_ublk.so.2.0 00:01:51.512 LIB libspdk_event_scsi.a 00:01:51.512 SO libspdk_event_nbd.so.5.0 00:01:51.512 SO libspdk_event_scsi.so.5.0 00:01:51.512 LIB libspdk_event_nvmf.a 00:01:51.513 SYMLINK libspdk_event_ublk.so 00:01:51.513 SO libspdk_event_nvmf.so.5.0 00:01:51.513 SYMLINK libspdk_event_nbd.so 00:01:51.513 SYMLINK libspdk_event_scsi.so 00:01:51.772 SYMLINK libspdk_event_nvmf.so 00:01:51.772 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:51.772 CC module/event/subsystems/iscsi/iscsi.o 00:01:52.032 LIB libspdk_event_vhost_scsi.a 00:01:52.032 LIB libspdk_event_iscsi.a 00:01:52.032 SO libspdk_event_vhost_scsi.so.2.0 00:01:52.032 SO libspdk_event_iscsi.so.5.0 00:01:52.032 SYMLINK libspdk_event_vhost_scsi.so 00:01:52.032 SYMLINK libspdk_event_iscsi.so 00:01:52.292 SO libspdk.so.5.0 00:01:52.292 SYMLINK libspdk.so 00:01:52.552 CXX app/trace/trace.o 00:01:52.552 CC app/spdk_nvme_perf/perf.o 00:01:52.552 TEST_HEADER include/spdk/accel.h 00:01:52.552 TEST_HEADER include/spdk/accel_module.h 00:01:52.552 CC app/spdk_lspci/spdk_lspci.o 00:01:52.552 TEST_HEADER include/spdk/barrier.h 00:01:52.552 TEST_HEADER include/spdk/assert.h 00:01:52.552 TEST_HEADER include/spdk/base64.h 00:01:52.552 TEST_HEADER include/spdk/bdev.h 00:01:52.552 CC app/spdk_nvme_identify/identify.o 00:01:52.552 TEST_HEADER include/spdk/bdev_module.h 00:01:52.552 TEST_HEADER include/spdk/bdev_zone.h 00:01:52.552 CC app/trace_record/trace_record.o 00:01:52.552 TEST_HEADER include/spdk/bit_array.h 00:01:52.552 CC app/spdk_top/spdk_top.o 00:01:52.552 CC app/spdk_nvme_discover/discovery_aer.o 00:01:52.552 TEST_HEADER include/spdk/blob_bdev.h 00:01:52.552 TEST_HEADER include/spdk/bit_pool.h 00:01:52.552 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:52.552 CC test/rpc_client/rpc_client_test.o 00:01:52.552 TEST_HEADER include/spdk/blobfs.h 00:01:52.552 TEST_HEADER include/spdk/config.h 00:01:52.552 TEST_HEADER include/spdk/conf.h 00:01:52.552 TEST_HEADER include/spdk/blob.h 00:01:52.552 TEST_HEADER include/spdk/cpuset.h 00:01:52.552 TEST_HEADER include/spdk/crc16.h 00:01:52.552 TEST_HEADER include/spdk/dma.h 00:01:52.552 TEST_HEADER 
include/spdk/crc64.h 00:01:52.552 TEST_HEADER include/spdk/crc32.h 00:01:52.552 CC app/iscsi_tgt/iscsi_tgt.o 00:01:52.552 CC app/nvmf_tgt/nvmf_main.o 00:01:52.552 TEST_HEADER include/spdk/dif.h 00:01:52.552 TEST_HEADER include/spdk/endian.h 00:01:52.552 TEST_HEADER include/spdk/env_dpdk.h 00:01:52.552 TEST_HEADER include/spdk/env.h 00:01:52.552 TEST_HEADER include/spdk/event.h 00:01:52.552 TEST_HEADER include/spdk/fd_group.h 00:01:52.552 TEST_HEADER include/spdk/fd.h 00:01:52.552 TEST_HEADER include/spdk/ftl.h 00:01:52.552 TEST_HEADER include/spdk/file.h 00:01:52.552 TEST_HEADER include/spdk/gpt_spec.h 00:01:52.552 CC app/vhost/vhost.o 00:01:52.552 CC app/spdk_dd/spdk_dd.o 00:01:52.552 TEST_HEADER include/spdk/hexlify.h 00:01:52.552 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:52.552 TEST_HEADER include/spdk/histogram_data.h 00:01:52.552 TEST_HEADER include/spdk/idxd.h 00:01:52.552 TEST_HEADER include/spdk/idxd_spec.h 00:01:52.552 TEST_HEADER include/spdk/ioat.h 00:01:52.552 TEST_HEADER include/spdk/ioat_spec.h 00:01:52.552 TEST_HEADER include/spdk/init.h 00:01:52.552 TEST_HEADER include/spdk/iscsi_spec.h 00:01:52.552 TEST_HEADER include/spdk/json.h 00:01:52.552 TEST_HEADER include/spdk/jsonrpc.h 00:01:52.552 TEST_HEADER include/spdk/likely.h 00:01:52.552 TEST_HEADER include/spdk/log.h 00:01:52.552 CC app/spdk_tgt/spdk_tgt.o 00:01:52.552 TEST_HEADER include/spdk/lvol.h 00:01:52.552 TEST_HEADER include/spdk/memory.h 00:01:52.552 TEST_HEADER include/spdk/mmio.h 00:01:52.552 TEST_HEADER include/spdk/nbd.h 00:01:52.552 TEST_HEADER include/spdk/notify.h 00:01:52.552 TEST_HEADER include/spdk/nvme.h 00:01:52.552 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:52.552 TEST_HEADER include/spdk/nvme_intel.h 00:01:52.552 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:52.552 TEST_HEADER include/spdk/nvme_spec.h 00:01:52.552 TEST_HEADER include/spdk/nvme_zns.h 00:01:52.552 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:52.552 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:52.552 TEST_HEADER include/spdk/nvmf.h 00:01:52.552 TEST_HEADER include/spdk/nvmf_spec.h 00:01:52.552 TEST_HEADER include/spdk/opal_spec.h 00:01:52.552 TEST_HEADER include/spdk/nvmf_transport.h 00:01:52.552 TEST_HEADER include/spdk/pci_ids.h 00:01:52.552 TEST_HEADER include/spdk/opal.h 00:01:52.552 TEST_HEADER include/spdk/queue.h 00:01:52.552 TEST_HEADER include/spdk/pipe.h 00:01:52.552 TEST_HEADER include/spdk/reduce.h 00:01:52.829 TEST_HEADER include/spdk/rpc.h 00:01:52.829 TEST_HEADER include/spdk/scheduler.h 00:01:52.829 TEST_HEADER include/spdk/scsi_spec.h 00:01:52.829 TEST_HEADER include/spdk/scsi.h 00:01:52.829 TEST_HEADER include/spdk/sock.h 00:01:52.829 TEST_HEADER include/spdk/string.h 00:01:52.829 TEST_HEADER include/spdk/stdinc.h 00:01:52.829 TEST_HEADER include/spdk/thread.h 00:01:52.829 TEST_HEADER include/spdk/trace.h 00:01:52.829 TEST_HEADER include/spdk/trace_parser.h 00:01:52.829 TEST_HEADER include/spdk/tree.h 00:01:52.829 TEST_HEADER include/spdk/ublk.h 00:01:52.829 TEST_HEADER include/spdk/util.h 00:01:52.829 TEST_HEADER include/spdk/uuid.h 00:01:52.829 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:52.829 TEST_HEADER include/spdk/version.h 00:01:52.829 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:52.829 TEST_HEADER include/spdk/vhost.h 00:01:52.829 TEST_HEADER include/spdk/vmd.h 00:01:52.829 TEST_HEADER include/spdk/xor.h 00:01:52.829 TEST_HEADER include/spdk/zipf.h 00:01:52.829 CXX test/cpp_headers/accel_module.o 00:01:52.829 CXX test/cpp_headers/accel.o 00:01:52.829 CXX test/cpp_headers/assert.o 00:01:52.829 CXX 
test/cpp_headers/barrier.o 00:01:52.829 CXX test/cpp_headers/base64.o 00:01:52.829 CXX test/cpp_headers/bdev.o 00:01:52.829 CXX test/cpp_headers/bit_array.o 00:01:52.829 CXX test/cpp_headers/bdev_zone.o 00:01:52.829 CXX test/cpp_headers/bit_pool.o 00:01:52.829 CXX test/cpp_headers/bdev_module.o 00:01:52.829 CXX test/cpp_headers/blob_bdev.o 00:01:52.829 CXX test/cpp_headers/blobfs.o 00:01:52.829 CXX test/cpp_headers/blobfs_bdev.o 00:01:52.829 CXX test/cpp_headers/conf.o 00:01:52.829 CXX test/cpp_headers/blob.o 00:01:52.829 CXX test/cpp_headers/cpuset.o 00:01:52.829 CXX test/cpp_headers/config.o 00:01:52.829 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.829 CXX test/cpp_headers/crc64.o 00:01:52.829 CXX test/cpp_headers/crc16.o 00:01:52.829 CXX test/cpp_headers/crc32.o 00:01:52.829 CXX test/cpp_headers/dif.o 00:01:52.829 CXX test/cpp_headers/dma.o 00:01:52.829 CXX test/cpp_headers/env_dpdk.o 00:01:52.829 CC examples/accel/perf/accel_perf.o 00:01:52.829 CXX test/cpp_headers/env.o 00:01:52.829 CXX test/cpp_headers/endian.o 00:01:52.829 CXX test/cpp_headers/fd_group.o 00:01:52.829 CXX test/cpp_headers/fd.o 00:01:52.829 CXX test/cpp_headers/event.o 00:01:52.829 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.829 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:52.829 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:52.829 CC test/app/jsoncat/jsoncat.o 00:01:52.829 CC app/fio/nvme/fio_plugin.o 00:01:52.829 CXX test/cpp_headers/file.o 00:01:52.829 CC examples/nvme/hello_world/hello_world.o 00:01:52.829 CXX test/cpp_headers/ftl.o 00:01:52.829 CXX test/cpp_headers/hexlify.o 00:01:52.829 CXX test/cpp_headers/histogram_data.o 00:01:52.829 CXX test/cpp_headers/gpt_spec.o 00:01:52.829 CC examples/idxd/perf/perf.o 00:01:52.829 CXX test/cpp_headers/idxd.o 00:01:52.829 CC examples/util/zipf/zipf.o 00:01:52.829 CXX test/cpp_headers/init.o 00:01:52.829 CC examples/nvme/hotplug/hotplug.o 00:01:52.829 CC test/event/reactor/reactor.o 00:01:52.829 CXX test/cpp_headers/idxd_spec.o 00:01:52.829 CXX test/cpp_headers/ioat.o 00:01:52.829 CC examples/sock/hello_world/hello_sock.o 00:01:52.829 CXX test/cpp_headers/ioat_spec.o 00:01:52.829 CC test/event/reactor_perf/reactor_perf.o 00:01:52.829 CC examples/nvme/reconnect/reconnect.o 00:01:52.829 CC test/nvme/reset/reset.o 00:01:52.829 CC test/nvme/aer/aer.o 00:01:52.829 CXX test/cpp_headers/iscsi_spec.o 00:01:52.829 CC examples/nvme/abort/abort.o 00:01:52.829 CC examples/nvme/arbitration/arbitration.o 00:01:52.829 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:52.829 CC examples/vmd/led/led.o 00:01:52.829 CC examples/ioat/verify/verify.o 00:01:52.829 CXX test/cpp_headers/json.o 00:01:52.829 CXX test/cpp_headers/jsonrpc.o 00:01:52.829 CXX test/cpp_headers/likely.o 00:01:52.829 CXX test/cpp_headers/log.o 00:01:52.829 CC test/app/histogram_perf/histogram_perf.o 00:01:52.829 CXX test/cpp_headers/lvol.o 00:01:52.829 CC test/app/stub/stub.o 00:01:52.829 CXX test/cpp_headers/memory.o 00:01:52.829 CC test/nvme/connect_stress/connect_stress.o 00:01:52.829 CXX test/cpp_headers/mmio.o 00:01:52.829 CC test/nvme/boot_partition/boot_partition.o 00:01:52.829 CXX test/cpp_headers/notify.o 00:01:52.829 CXX test/cpp_headers/nbd.o 00:01:52.829 CXX test/cpp_headers/nvme.o 00:01:52.829 CC examples/nvmf/nvmf/nvmf.o 00:01:52.829 CC test/thread/poller_perf/poller_perf.o 00:01:52.829 CXX test/cpp_headers/nvme_intel.o 00:01:52.829 CC test/env/pci/pci_ut.o 00:01:52.829 CXX test/cpp_headers/nvme_ocssd.o 00:01:52.829 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:52.829 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:01:52.829 CC test/nvme/err_injection/err_injection.o 00:01:52.829 CC test/accel/dif/dif.o 00:01:52.829 CC test/nvme/cuse/cuse.o 00:01:52.829 CC test/nvme/sgl/sgl.o 00:01:52.829 CC test/event/app_repeat/app_repeat.o 00:01:52.829 CC test/nvme/e2edp/nvme_dp.o 00:01:52.829 CC test/event/event_perf/event_perf.o 00:01:52.829 CXX test/cpp_headers/nvme_spec.o 00:01:52.829 CC test/nvme/overhead/overhead.o 00:01:52.829 CXX test/cpp_headers/nvme_zns.o 00:01:52.829 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.829 CXX test/cpp_headers/nvmf_cmd.o 00:01:52.829 CC examples/ioat/perf/perf.o 00:01:52.829 CC examples/thread/thread/thread_ex.o 00:01:52.829 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:52.829 CC test/nvme/simple_copy/simple_copy.o 00:01:52.829 CC test/nvme/fused_ordering/fused_ordering.o 00:01:52.829 CC test/nvme/startup/startup.o 00:01:52.829 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:52.829 CC test/nvme/reserve/reserve.o 00:01:52.829 CXX test/cpp_headers/nvmf.o 00:01:52.829 CXX test/cpp_headers/nvmf_spec.o 00:01:52.829 CC examples/blob/hello_world/hello_blob.o 00:01:52.829 CXX test/cpp_headers/nvmf_transport.o 00:01:52.829 CXX test/cpp_headers/opal.o 00:01:52.829 CC app/fio/bdev/fio_plugin.o 00:01:52.829 CXX test/cpp_headers/opal_spec.o 00:01:52.829 CC test/bdev/bdevio/bdevio.o 00:01:52.829 CC examples/blob/cli/blobcli.o 00:01:52.829 CXX test/cpp_headers/queue.o 00:01:52.829 CC test/env/memory/memory_ut.o 00:01:52.829 CC test/dma/test_dma/test_dma.o 00:01:52.829 CXX test/cpp_headers/pipe.o 00:01:52.829 CXX test/cpp_headers/reduce.o 00:01:52.829 CXX test/cpp_headers/pci_ids.o 00:01:52.829 CC test/env/vtophys/vtophys.o 00:01:52.829 CC test/nvme/compliance/nvme_compliance.o 00:01:52.829 CC test/event/scheduler/scheduler.o 00:01:52.829 CXX test/cpp_headers/scheduler.o 00:01:52.829 CXX test/cpp_headers/rpc.o 00:01:52.829 CXX test/cpp_headers/scsi.o 00:01:52.829 CC test/app/bdev_svc/bdev_svc.o 00:01:52.829 CC test/nvme/fdp/fdp.o 00:01:52.829 CC test/blobfs/mkfs/mkfs.o 00:01:53.097 CXX test/cpp_headers/scsi_spec.o 00:01:53.097 LINK spdk_lspci 00:01:53.097 CXX test/cpp_headers/sock.o 00:01:53.379 LINK nvmf_tgt 00:01:53.379 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:53.379 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:53.379 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:53.379 CC test/lvol/esnap/esnap.o 00:01:53.379 CC test/env/mem_callbacks/mem_callbacks.o 00:01:53.379 LINK interrupt_tgt 00:01:53.379 LINK rpc_client_test 00:01:53.379 LINK vhost 00:01:53.379 LINK iscsi_tgt 00:01:53.379 LINK spdk_tgt 00:01:53.379 LINK spdk_nvme_discover 00:01:53.651 LINK lsvmd 00:01:53.651 LINK spdk_trace_record 00:01:53.651 LINK jsoncat 00:01:53.651 LINK led 00:01:53.651 LINK reactor 00:01:53.651 LINK histogram_perf 00:01:53.651 LINK zipf 00:01:53.651 LINK env_dpdk_post_init 00:01:53.915 LINK reactor_perf 00:01:53.915 LINK event_perf 00:01:53.915 LINK vtophys 00:01:53.915 LINK err_injection 00:01:53.915 LINK app_repeat 00:01:53.915 LINK poller_perf 00:01:53.915 LINK connect_stress 00:01:53.915 LINK boot_partition 00:01:53.915 LINK pmr_persistence 00:01:53.915 LINK doorbell_aers 00:01:53.915 LINK stub 00:01:53.915 LINK bdev_svc 00:01:53.915 LINK spdk_dd 00:01:53.915 LINK ioat_perf 00:01:53.915 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:53.915 LINK cmb_copy 00:01:53.915 LINK startup 00:01:53.915 LINK verify 00:01:53.915 LINK simple_copy 00:01:53.915 LINK hello_world 00:01:53.915 CXX test/cpp_headers/stdinc.o 00:01:53.915 LINK hello_bdev 00:01:53.915 LINK 
scheduler 00:01:53.915 LINK mkfs 00:01:53.915 LINK fused_ordering 00:01:54.175 CXX test/cpp_headers/string.o 00:01:54.175 LINK hello_blob 00:01:54.175 LINK thread 00:01:54.175 LINK nvme_dp 00:01:54.175 LINK reset 00:01:54.175 CXX test/cpp_headers/thread.o 00:01:54.175 CXX test/cpp_headers/trace.o 00:01:54.175 CXX test/cpp_headers/trace_parser.o 00:01:54.175 CXX test/cpp_headers/tree.o 00:01:54.175 LINK reserve 00:01:54.175 CXX test/cpp_headers/ublk.o 00:01:54.175 CXX test/cpp_headers/util.o 00:01:54.175 LINK hotplug 00:01:54.175 LINK aer 00:01:54.175 LINK sgl 00:01:54.175 CXX test/cpp_headers/uuid.o 00:01:54.175 LINK hello_sock 00:01:54.175 CXX test/cpp_headers/vfio_user_pci.o 00:01:54.175 CXX test/cpp_headers/version.o 00:01:54.175 CXX test/cpp_headers/vfio_user_spec.o 00:01:54.175 CXX test/cpp_headers/vhost.o 00:01:54.175 CXX test/cpp_headers/vmd.o 00:01:54.175 CXX test/cpp_headers/xor.o 00:01:54.175 CXX test/cpp_headers/zipf.o 00:01:54.175 LINK nvmf 00:01:54.175 LINK overhead 00:01:54.175 LINK idxd_perf 00:01:54.175 LINK arbitration 00:01:54.175 LINK reconnect 00:01:54.175 LINK nvme_compliance 00:01:54.175 LINK fdp 00:01:54.175 LINK abort 00:01:54.436 LINK test_dma 00:01:54.436 LINK spdk_trace 00:01:54.436 LINK dif 00:01:54.436 LINK bdevio 00:01:54.436 LINK pci_ut 00:01:54.436 LINK accel_perf 00:01:54.436 LINK spdk_nvme 00:01:54.436 LINK nvme_manage 00:01:54.436 LINK spdk_bdev 00:01:54.436 LINK blobcli 00:01:54.436 LINK nvme_fuzz 00:01:54.436 LINK vhost_fuzz 00:01:54.697 LINK mem_callbacks 00:01:54.697 LINK spdk_nvme_perf 00:01:54.697 LINK spdk_top 00:01:54.697 LINK bdevperf 00:01:54.697 LINK spdk_nvme_identify 00:01:54.697 LINK memory_ut 00:01:54.959 LINK cuse 00:01:55.530 LINK iscsi_fuzz 00:01:58.081 LINK esnap 00:01:58.081 00:01:58.081 real 0m48.353s 00:01:58.081 user 6m43.999s 00:01:58.081 sys 5m29.571s 00:01:58.081 05:17:01 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:58.081 05:17:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.081 ************************************ 00:01:58.081 END TEST make 00:01:58.081 ************************************ 00:01:58.081 05:17:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:01:58.081 05:17:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:01:58.081 05:17:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:01:58.081 05:17:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:01:58.081 05:17:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:01:58.081 05:17:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:01:58.081 05:17:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:01:58.081 05:17:01 -- scripts/common.sh@335 -- # IFS=.-: 00:01:58.081 05:17:01 -- scripts/common.sh@335 -- # read -ra ver1 00:01:58.081 05:17:01 -- scripts/common.sh@336 -- # IFS=.-: 00:01:58.081 05:17:01 -- scripts/common.sh@336 -- # read -ra ver2 00:01:58.081 05:17:01 -- scripts/common.sh@337 -- # local 'op=<' 00:01:58.081 05:17:01 -- scripts/common.sh@339 -- # ver1_l=2 00:01:58.081 05:17:01 -- scripts/common.sh@340 -- # ver2_l=1 00:01:58.081 05:17:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:01:58.081 05:17:01 -- scripts/common.sh@343 -- # case "$op" in 00:01:58.081 05:17:01 -- scripts/common.sh@344 -- # : 1 00:01:58.081 05:17:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:01:58.081 05:17:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.081 05:17:01 -- scripts/common.sh@364 -- # decimal 1 00:01:58.081 05:17:01 -- scripts/common.sh@352 -- # local d=1 00:01:58.081 05:17:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:01:58.082 05:17:01 -- scripts/common.sh@354 -- # echo 1 00:01:58.082 05:17:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:01:58.082 05:17:01 -- scripts/common.sh@365 -- # decimal 2 00:01:58.082 05:17:01 -- scripts/common.sh@352 -- # local d=2 00:01:58.082 05:17:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:01:58.082 05:17:01 -- scripts/common.sh@354 -- # echo 2 00:01:58.082 05:17:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:01:58.082 05:17:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:01:58.082 05:17:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:01:58.082 05:17:01 -- scripts/common.sh@367 -- # return 0 00:01:58.082 05:17:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:01:58.082 05:17:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:01:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.082 --rc genhtml_branch_coverage=1 00:01:58.082 --rc genhtml_function_coverage=1 00:01:58.082 --rc genhtml_legend=1 00:01:58.082 --rc geninfo_all_blocks=1 00:01:58.082 --rc geninfo_unexecuted_blocks=1 00:01:58.082 00:01:58.082 ' 00:01:58.082 05:17:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:01:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.082 --rc genhtml_branch_coverage=1 00:01:58.082 --rc genhtml_function_coverage=1 00:01:58.082 --rc genhtml_legend=1 00:01:58.082 --rc geninfo_all_blocks=1 00:01:58.082 --rc geninfo_unexecuted_blocks=1 00:01:58.082 00:01:58.082 ' 00:01:58.082 05:17:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:01:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.082 --rc genhtml_branch_coverage=1 00:01:58.082 --rc genhtml_function_coverage=1 00:01:58.082 --rc genhtml_legend=1 00:01:58.082 --rc geninfo_all_blocks=1 00:01:58.082 --rc geninfo_unexecuted_blocks=1 00:01:58.082 00:01:58.082 ' 00:01:58.082 05:17:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:01:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.082 --rc genhtml_branch_coverage=1 00:01:58.082 --rc genhtml_function_coverage=1 00:01:58.082 --rc genhtml_legend=1 00:01:58.082 --rc geninfo_all_blocks=1 00:01:58.082 --rc geninfo_unexecuted_blocks=1 00:01:58.082 00:01:58.082 ' 00:01:58.082 05:17:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.082 05:17:01 -- nvmf/common.sh@7 -- # uname -s 00:01:58.082 05:17:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.082 05:17:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.082 05:17:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.082 05:17:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.082 05:17:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.082 05:17:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.082 05:17:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.082 05:17:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.082 05:17:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.082 05:17:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.082 05:17:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:01:58.082 05:17:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:01:58.082 05:17:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.082 05:17:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.082 05:17:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:58.082 05:17:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:58.082 05:17:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.082 05:17:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.082 05:17:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.082 05:17:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.082 05:17:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.082 05:17:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.082 05:17:01 -- paths/export.sh@5 -- # export PATH 00:01:58.082 05:17:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.082 05:17:01 -- nvmf/common.sh@46 -- # : 0 00:01:58.082 05:17:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:01:58.082 05:17:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:01:58.082 05:17:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:01:58.082 05:17:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.082 05:17:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.082 05:17:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:01:58.082 05:17:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:01:58.082 05:17:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:01:58.082 05:17:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.082 05:17:01 -- spdk/autotest.sh@32 -- # uname -s 00:01:58.082 05:17:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.082 05:17:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.082 05:17:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:58.082 05:17:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.082 05:17:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:58.082 05:17:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.082 
05:17:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.082 05:17:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.082 05:17:01 -- spdk/autotest.sh@48 -- # udevadm_pid=1539525 00:01:58.082 05:17:01 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:58.082 05:17:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.082 05:17:01 -- spdk/autotest.sh@54 -- # echo 1539527 00:01:58.082 05:17:01 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:58.082 05:17:01 -- spdk/autotest.sh@56 -- # echo 1539528 00:01:58.082 05:17:01 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:01:58.082 05:17:01 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:58.082 05:17:01 -- spdk/autotest.sh@60 -- # echo 1539529 00:01:58.082 05:17:01 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:58.082 05:17:01 -- spdk/autotest.sh@62 -- # echo 1539530 00:01:58.082 05:17:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.082 05:17:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:01:58.082 05:17:01 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:58.082 05:17:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:58.082 05:17:01 -- common/autotest_common.sh@10 -- # set +x 00:01:58.422 05:17:01 -- spdk/autotest.sh@70 -- # create_test_list 00:01:58.422 05:17:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:58.422 05:17:01 -- common/autotest_common.sh@10 -- # set +x 00:01:58.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:01:58.423 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:01:58.423 05:17:01 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.423 05:17:01 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.423 05:17:01 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.423 05:17:01 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.423 05:17:01 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.423 05:17:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:01:58.423 05:17:01 -- common/autotest_common.sh@1450 -- # uname 00:01:58.423 05:17:01 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:01:58.423 05:17:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:01:58.423 05:17:01 -- common/autotest_common.sh@1470 -- # uname 00:01:58.423 05:17:01 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:01:58.423 05:17:01 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:01:58.423 05:17:01 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:01:58.423 lcov: LCOV version 1.15 00:01:58.423 05:17:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:01.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:01.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:01.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:01.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:01.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:01.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:27.629 05:17:28 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:27.629 05:17:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:27.629 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:02:27.629 05:17:28 -- spdk/autotest.sh@89 -- # rm -f 00:02:27.629 05:17:28 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.570 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:28.570 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:28.830 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:28.830 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:29.090 05:17:32 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:29.090 05:17:32 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:29.090 05:17:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:29.090 05:17:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:29.090 05:17:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:29.090 05:17:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:29.090 05:17:32 -- common/autotest_common.sh@1657 -- # local 
device=nvme0n1 00:02:29.090 05:17:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.090 05:17:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:29.090 05:17:32 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:29.090 05:17:32 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:02:29.090 05:17:32 -- spdk/autotest.sh@108 -- # grep -v p 00:02:29.090 05:17:32 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:29.090 05:17:32 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:29.090 05:17:32 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:29.090 05:17:32 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:29.090 05:17:32 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:29.350 No valid GPT data, bailing 00:02:29.351 05:17:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:29.351 05:17:32 -- scripts/common.sh@393 -- # pt= 00:02:29.351 05:17:32 -- scripts/common.sh@394 -- # return 1 00:02:29.351 05:17:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:29.351 1+0 records in 00:02:29.351 1+0 records out 00:02:29.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437052 s, 240 MB/s 00:02:29.351 05:17:32 -- spdk/autotest.sh@116 -- # sync 00:02:29.351 05:17:32 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:29.351 05:17:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:29.351 05:17:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:37.488 05:17:40 -- spdk/autotest.sh@122 -- # uname -s 00:02:37.488 05:17:40 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:02:37.488 05:17:40 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.488 05:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.488 05:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.488 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:02:37.488 ************************************ 00:02:37.488 START TEST setup.sh 00:02:37.488 ************************************ 00:02:37.488 05:17:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.750 * Looking for test storage... 
00:02:37.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.750 05:17:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:37.750 05:17:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:37.750 05:17:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:37.750 05:17:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:37.750 05:17:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:37.750 05:17:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:37.750 05:17:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:37.750 05:17:40 -- scripts/common.sh@335 -- # IFS=.-: 00:02:37.750 05:17:40 -- scripts/common.sh@335 -- # read -ra ver1 00:02:37.750 05:17:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:37.750 05:17:40 -- scripts/common.sh@336 -- # read -ra ver2 00:02:37.750 05:17:40 -- scripts/common.sh@337 -- # local 'op=<' 00:02:37.750 05:17:40 -- scripts/common.sh@339 -- # ver1_l=2 00:02:37.750 05:17:40 -- scripts/common.sh@340 -- # ver2_l=1 00:02:37.750 05:17:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:37.750 05:17:40 -- scripts/common.sh@343 -- # case "$op" in 00:02:37.750 05:17:40 -- scripts/common.sh@344 -- # : 1 00:02:37.750 05:17:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:37.750 05:17:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:37.750 05:17:40 -- scripts/common.sh@364 -- # decimal 1 00:02:37.750 05:17:40 -- scripts/common.sh@352 -- # local d=1 00:02:37.750 05:17:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:37.750 05:17:40 -- scripts/common.sh@354 -- # echo 1 00:02:37.750 05:17:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:37.750 05:17:40 -- scripts/common.sh@365 -- # decimal 2 00:02:37.751 05:17:40 -- scripts/common.sh@352 -- # local d=2 00:02:37.751 05:17:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:37.751 05:17:40 -- scripts/common.sh@354 -- # echo 2 00:02:37.751 05:17:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:37.751 05:17:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:37.751 05:17:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:37.751 05:17:40 -- scripts/common.sh@367 -- # return 0 00:02:37.751 05:17:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:37.751 05:17:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:37.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.751 --rc genhtml_branch_coverage=1 00:02:37.751 --rc genhtml_function_coverage=1 00:02:37.751 --rc genhtml_legend=1 00:02:37.751 --rc geninfo_all_blocks=1 00:02:37.751 --rc geninfo_unexecuted_blocks=1 00:02:37.751 00:02:37.751 ' 00:02:37.751 05:17:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:37.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.751 --rc genhtml_branch_coverage=1 00:02:37.751 --rc genhtml_function_coverage=1 00:02:37.751 --rc genhtml_legend=1 00:02:37.751 --rc geninfo_all_blocks=1 00:02:37.751 --rc geninfo_unexecuted_blocks=1 00:02:37.751 00:02:37.751 ' 00:02:37.751 05:17:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:37.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.751 --rc genhtml_branch_coverage=1 00:02:37.751 --rc genhtml_function_coverage=1 00:02:37.751 --rc genhtml_legend=1 00:02:37.751 --rc geninfo_all_blocks=1 00:02:37.751 --rc geninfo_unexecuted_blocks=1 00:02:37.751 00:02:37.751 ' 
00:02:37.751 05:17:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:37.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.751 --rc genhtml_branch_coverage=1 00:02:37.751 --rc genhtml_function_coverage=1 00:02:37.751 --rc genhtml_legend=1 00:02:37.751 --rc geninfo_all_blocks=1 00:02:37.751 --rc geninfo_unexecuted_blocks=1 00:02:37.751 00:02:37.751 ' 00:02:37.751 05:17:40 -- setup/test-setup.sh@10 -- # uname -s 00:02:37.751 05:17:40 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:37.751 05:17:40 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:37.751 05:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.751 05:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.751 05:17:40 -- common/autotest_common.sh@10 -- # set +x 00:02:37.751 ************************************ 00:02:37.751 START TEST acl 00:02:37.751 ************************************ 00:02:37.751 05:17:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:37.751 * Looking for test storage... 00:02:37.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.751 05:17:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:37.751 05:17:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:37.751 05:17:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:38.013 05:17:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:38.013 05:17:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:38.013 05:17:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:38.013 05:17:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:38.013 05:17:41 -- scripts/common.sh@335 -- # IFS=.-: 00:02:38.013 05:17:41 -- scripts/common.sh@335 -- # read -ra ver1 00:02:38.013 05:17:41 -- scripts/common.sh@336 -- # IFS=.-: 00:02:38.013 05:17:41 -- scripts/common.sh@336 -- # read -ra ver2 00:02:38.013 05:17:41 -- scripts/common.sh@337 -- # local 'op=<' 00:02:38.013 05:17:41 -- scripts/common.sh@339 -- # ver1_l=2 00:02:38.013 05:17:41 -- scripts/common.sh@340 -- # ver2_l=1 00:02:38.013 05:17:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:38.013 05:17:41 -- scripts/common.sh@343 -- # case "$op" in 00:02:38.013 05:17:41 -- scripts/common.sh@344 -- # : 1 00:02:38.013 05:17:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:38.013 05:17:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.013 05:17:41 -- scripts/common.sh@364 -- # decimal 1 00:02:38.013 05:17:41 -- scripts/common.sh@352 -- # local d=1 00:02:38.013 05:17:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:38.013 05:17:41 -- scripts/common.sh@354 -- # echo 1 00:02:38.013 05:17:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:38.013 05:17:41 -- scripts/common.sh@365 -- # decimal 2 00:02:38.013 05:17:41 -- scripts/common.sh@352 -- # local d=2 00:02:38.013 05:17:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:38.013 05:17:41 -- scripts/common.sh@354 -- # echo 2 00:02:38.013 05:17:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:38.013 05:17:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:38.013 05:17:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:38.013 05:17:41 -- scripts/common.sh@367 -- # return 0 00:02:38.013 05:17:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:38.013 05:17:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:38.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.013 --rc genhtml_branch_coverage=1 00:02:38.013 --rc genhtml_function_coverage=1 00:02:38.013 --rc genhtml_legend=1 00:02:38.013 --rc geninfo_all_blocks=1 00:02:38.013 --rc geninfo_unexecuted_blocks=1 00:02:38.013 00:02:38.013 ' 00:02:38.013 05:17:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:38.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.013 --rc genhtml_branch_coverage=1 00:02:38.013 --rc genhtml_function_coverage=1 00:02:38.013 --rc genhtml_legend=1 00:02:38.013 --rc geninfo_all_blocks=1 00:02:38.013 --rc geninfo_unexecuted_blocks=1 00:02:38.013 00:02:38.013 ' 00:02:38.013 05:17:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:38.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.013 --rc genhtml_branch_coverage=1 00:02:38.013 --rc genhtml_function_coverage=1 00:02:38.013 --rc genhtml_legend=1 00:02:38.013 --rc geninfo_all_blocks=1 00:02:38.013 --rc geninfo_unexecuted_blocks=1 00:02:38.013 00:02:38.013 ' 00:02:38.013 05:17:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:38.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.013 --rc genhtml_branch_coverage=1 00:02:38.013 --rc genhtml_function_coverage=1 00:02:38.013 --rc genhtml_legend=1 00:02:38.013 --rc geninfo_all_blocks=1 00:02:38.013 --rc geninfo_unexecuted_blocks=1 00:02:38.013 00:02:38.013 ' 00:02:38.013 05:17:41 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.013 05:17:41 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:38.013 05:17:41 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:38.013 05:17:41 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:38.013 05:17:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:38.013 05:17:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:38.013 05:17:41 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:38.014 05:17:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.014 05:17:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:38.014 05:17:41 -- setup/acl.sh@12 -- # devs=() 00:02:38.014 05:17:41 -- setup/acl.sh@12 -- # declare -a devs 00:02:38.014 05:17:41 -- setup/acl.sh@13 -- # drivers=() 00:02:38.014 05:17:41 -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.014 05:17:41 -- setup/acl.sh@51 -- # 
setup reset 00:02:38.014 05:17:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.014 05:17:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.225 05:17:45 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:42.225 05:17:45 -- setup/acl.sh@16 -- # local dev driver 00:02:42.225 05:17:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.225 05:17:45 -- setup/acl.sh@15 -- # setup output status 00:02:42.225 05:17:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.225 05:17:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:45.529 Hugepages 00:02:45.529 node hugesize free / total 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 00:02:45.791 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 
05:17:48 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:45.791 05:17:48 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.791 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.791 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.791 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.792 05:17:48 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:45.792 05:17:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.792 05:17:48 -- setup/acl.sh@20 -- # continue 00:02:45.792 05:17:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.792 05:17:49 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:45.792 05:17:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.792 05:17:49 -- setup/acl.sh@20 -- # continue 00:02:45.792 05:17:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.792 05:17:49 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:45.792 05:17:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.792 05:17:49 -- setup/acl.sh@20 -- # continue 00:02:45.792 05:17:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.792 05:17:49 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:45.792 05:17:49 -- setup/acl.sh@54 -- # run_test denied denied 00:02:45.792 05:17:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:45.792 05:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:45.792 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:02:45.792 ************************************ 00:02:45.792 START TEST denied 00:02:45.792 ************************************ 00:02:45.792 05:17:49 -- common/autotest_common.sh@1114 -- # denied 00:02:45.792 05:17:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:45.792 05:17:49 -- setup/acl.sh@38 -- # setup 
output config 00:02:45.792 05:17:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:45.792 05:17:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.792 05:17:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.997 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:49.997 05:17:53 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:49.997 05:17:53 -- setup/acl.sh@28 -- # local dev driver 00:02:49.997 05:17:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:49.997 05:17:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:49.997 05:17:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:49.997 05:17:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:49.997 05:17:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:49.997 05:17:53 -- setup/acl.sh@41 -- # setup reset 00:02:49.997 05:17:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.997 05:17:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.282 00:02:55.282 real 0m9.421s 00:02:55.282 user 0m3.095s 00:02:55.282 sys 0m5.566s 00:02:55.282 05:17:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:02:55.282 05:17:58 -- common/autotest_common.sh@10 -- # set +x 00:02:55.282 ************************************ 00:02:55.282 END TEST denied 00:02:55.282 ************************************ 00:02:55.283 05:17:58 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:55.283 05:17:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:55.283 05:17:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:55.283 05:17:58 -- common/autotest_common.sh@10 -- # set +x 00:02:55.283 ************************************ 00:02:55.283 START TEST allowed 00:02:55.283 ************************************ 00:02:55.283 05:17:58 -- common/autotest_common.sh@1114 -- # allowed 00:02:55.283 05:17:58 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:02:55.283 05:17:58 -- setup/acl.sh@45 -- # setup output config 00:02:55.283 05:17:58 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:02:55.283 05:17:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.283 05:17:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.870 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:01.870 05:18:04 -- setup/acl.sh@47 -- # verify 00:03:01.870 05:18:04 -- setup/acl.sh@28 -- # local dev driver 00:03:01.870 05:18:04 -- setup/acl.sh@48 -- # setup reset 00:03:01.870 05:18:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.870 05:18:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.155 00:03:06.155 real 0m10.270s 00:03:06.155 user 0m3.033s 00:03:06.155 sys 0m5.565s 00:03:06.155 05:18:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:06.155 05:18:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.155 ************************************ 00:03:06.155 END TEST allowed 00:03:06.155 ************************************ 00:03:06.155 00:03:06.155 real 0m27.936s 00:03:06.155 user 0m9.151s 00:03:06.155 sys 0m16.588s 00:03:06.155 05:18:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:06.155 05:18:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.155 ************************************ 00:03:06.155 END TEST acl 00:03:06.155 
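Both ACL sub-tests above exercise the same scripts/setup.sh through environment variables: with PCI_BLOCKED=' 0000:65:00.0' the config pass prints "Skipping denied controller at 0000:65:00.0" and the controller stays on the nvme driver, while PCI_ALLOWED=0000:65:00.0 lets it be rebound (nvme -> vfio-pci). Verification in both cases is just a readlink on the device's sysfs driver symlink. A condensed sketch of that check, with the BDF taken from this run and the setup.sh path shortened to its repo-relative form:

  bdf=0000:65:00.0                      # controller under test in this run
  # PCI_BLOCKED / PCI_ALLOWED are read by scripts/setup.sh, as in the trace above.
  PCI_BLOCKED=" $bdf" ./scripts/setup.sh config | grep "Skipping denied controller at $bdf"
  # Whichever driver currently owns the device is the target of its sysfs driver symlink.
  driver=$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")
  echo "$bdf is bound to: $driver"      # nvme while denied, vfio-pci once allowed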
************************************ 00:03:06.155 05:18:08 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.155 05:18:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.155 05:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.155 05:18:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.155 ************************************ 00:03:06.155 START TEST hugepages 00:03:06.155 ************************************ 00:03:06.155 05:18:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.155 * Looking for test storage... 00:03:06.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.155 05:18:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:06.155 05:18:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:06.155 05:18:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:06.155 05:18:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:06.155 05:18:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:06.155 05:18:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:06.155 05:18:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:06.155 05:18:09 -- scripts/common.sh@335 -- # IFS=.-: 00:03:06.155 05:18:09 -- scripts/common.sh@335 -- # read -ra ver1 00:03:06.155 05:18:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:06.155 05:18:09 -- scripts/common.sh@336 -- # read -ra ver2 00:03:06.155 05:18:09 -- scripts/common.sh@337 -- # local 'op=<' 00:03:06.155 05:18:09 -- scripts/common.sh@339 -- # ver1_l=2 00:03:06.155 05:18:09 -- scripts/common.sh@340 -- # ver2_l=1 00:03:06.155 05:18:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:06.155 05:18:09 -- scripts/common.sh@343 -- # case "$op" in 00:03:06.155 05:18:09 -- scripts/common.sh@344 -- # : 1 00:03:06.155 05:18:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:06.155 05:18:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:06.155 05:18:09 -- scripts/common.sh@364 -- # decimal 1 00:03:06.155 05:18:09 -- scripts/common.sh@352 -- # local d=1 00:03:06.155 05:18:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:06.155 05:18:09 -- scripts/common.sh@354 -- # echo 1 00:03:06.155 05:18:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:06.155 05:18:09 -- scripts/common.sh@365 -- # decimal 2 00:03:06.155 05:18:09 -- scripts/common.sh@352 -- # local d=2 00:03:06.155 05:18:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:06.155 05:18:09 -- scripts/common.sh@354 -- # echo 2 00:03:06.155 05:18:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:06.155 05:18:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:06.155 05:18:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:06.155 05:18:09 -- scripts/common.sh@367 -- # return 0 00:03:06.155 05:18:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:06.155 05:18:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:06.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.155 --rc genhtml_branch_coverage=1 00:03:06.155 --rc genhtml_function_coverage=1 00:03:06.155 --rc genhtml_legend=1 00:03:06.155 --rc geninfo_all_blocks=1 00:03:06.155 --rc geninfo_unexecuted_blocks=1 00:03:06.155 00:03:06.155 ' 00:03:06.155 05:18:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:06.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.155 --rc genhtml_branch_coverage=1 00:03:06.155 --rc genhtml_function_coverage=1 00:03:06.155 --rc genhtml_legend=1 00:03:06.155 --rc geninfo_all_blocks=1 00:03:06.155 --rc geninfo_unexecuted_blocks=1 00:03:06.155 00:03:06.155 ' 00:03:06.155 05:18:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:06.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.155 --rc genhtml_branch_coverage=1 00:03:06.155 --rc genhtml_function_coverage=1 00:03:06.155 --rc genhtml_legend=1 00:03:06.155 --rc geninfo_all_blocks=1 00:03:06.155 --rc geninfo_unexecuted_blocks=1 00:03:06.155 00:03:06.155 ' 00:03:06.155 05:18:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:06.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:06.155 --rc genhtml_branch_coverage=1 00:03:06.155 --rc genhtml_function_coverage=1 00:03:06.155 --rc genhtml_legend=1 00:03:06.155 --rc geninfo_all_blocks=1 00:03:06.155 --rc geninfo_unexecuted_blocks=1 00:03:06.155 00:03:06.155 ' 00:03:06.155 05:18:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.155 05:18:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.155 05:18:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.155 05:18:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.155 05:18:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.155 05:18:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.155 05:18:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.155 05:18:09 -- setup/common.sh@18 -- # local node= 00:03:06.155 05:18:09 -- setup/common.sh@19 -- # local var val 00:03:06.155 05:18:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.155 05:18:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.155 05:18:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.155 05:18:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.155 05:18:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.155 
05:18:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.155 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 107375920 kB' 'MemAvailable: 110843240 kB' 'Buffers: 5168 kB' 'Cached: 9817696 kB' 'SwapCached: 0 kB' 'Active: 6605596 kB' 'Inactive: 3765728 kB' 'Active(anon): 6210252 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551912 kB' 'Mapped: 182904 kB' 'Shmem: 5661792 kB' 'KReclaimable: 261012 kB' 'Slab: 1350004 kB' 'SReclaimable: 261012 kB' 'SUnreclaim: 1088992 kB' 'KernelStack: 27168 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69453832 kB' 'Committed_AS: 7477728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.156 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.156 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 
05:18:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # continue 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.157 05:18:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.157 05:18:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.157 05:18:09 -- setup/common.sh@33 -- # echo 2048 00:03:06.157 05:18:09 -- setup/common.sh@33 -- # return 0 00:03:06.157 05:18:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.157 05:18:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.157 05:18:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.157 05:18:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.157 05:18:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.157 05:18:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.157 05:18:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.157 05:18:09 -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.157 05:18:09 -- setup/hugepages.sh@27 -- # local node 00:03:06.157 05:18:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.157 05:18:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.157 05:18:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.157 05:18:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.157 05:18:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.157 05:18:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.157 05:18:09 -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.157 05:18:09 -- setup/hugepages.sh@37 -- # local node hp 00:03:06.157 05:18:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.157 05:18:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.157 05:18:09 -- setup/hugepages.sh@41 -- # echo 0 
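Two helpers dominate the hugepages prologue traced above: get_meminfo pulls a single field out of /proc/meminfo (here Hugepagesize, 2048 kB), and clear_hp then writes 0 into the per-node, per-size nr_hugepages files before the sub-tests run. A simplified sketch of both, assuming the 2048 kB default hugepage size reported in this run and ignoring the per-node meminfo handling of the real helpers:

  get_meminfo() {      # e.g. "get_meminfo Hugepagesize" prints 2048 on this system
      local field=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$field" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  clear_hp() {         # reset every hugepage pool to zero pages (needs root)
      local hp
      for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
          echo 0 > "$hp"
      done
  }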
00:03:06.157 05:18:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.157 05:18:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:06.157 05:18:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.157 05:18:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.157 05:18:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:06.157 05:18:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.157 05:18:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:06.157 05:18:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.157 05:18:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.157 05:18:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:06.157 05:18:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.157 05:18:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.157 05:18:09 -- common/autotest_common.sh@10 -- # set +x 00:03:06.158 ************************************ 00:03:06.158 START TEST default_setup 00:03:06.158 ************************************ 00:03:06.158 05:18:09 -- common/autotest_common.sh@1114 -- # default_setup 00:03:06.158 05:18:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.158 05:18:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.158 05:18:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.158 05:18:09 -- setup/hugepages.sh@51 -- # shift 00:03:06.158 05:18:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.158 05:18:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.158 05:18:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.158 05:18:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.158 05:18:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.158 05:18:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.158 05:18:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.158 05:18:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.158 05:18:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.158 05:18:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.158 05:18:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.158 05:18:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.158 05:18:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.158 05:18:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.158 05:18:09 -- setup/hugepages.sh@73 -- # return 0 00:03:06.158 05:18:09 -- setup/hugepages.sh@137 -- # setup output 00:03:06.158 05:18:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.158 05:18:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.384 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 
00:03:10.384 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:10.384 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:10.384 05:18:13 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:10.384 05:18:13 -- setup/hugepages.sh@89 -- # local node 00:03:10.384 05:18:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.384 05:18:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.384 05:18:13 -- setup/hugepages.sh@92 -- # local surp 00:03:10.384 05:18:13 -- setup/hugepages.sh@93 -- # local resv 00:03:10.384 05:18:13 -- setup/hugepages.sh@94 -- # local anon 00:03:10.384 05:18:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.384 05:18:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.384 05:18:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.384 05:18:13 -- setup/common.sh@18 -- # local node= 00:03:10.385 05:18:13 -- setup/common.sh@19 -- # local var val 00:03:10.385 05:18:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.385 05:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.385 05:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.385 05:18:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.385 05:18:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.385 05:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109597104 kB' 'MemAvailable: 113064420 kB' 'Buffers: 5168 kB' 'Cached: 9817824 kB' 'SwapCached: 0 kB' 'Active: 6598684 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203340 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544800 kB' 'Mapped: 181980 kB' 'Shmem: 5661920 kB' 'KReclaimable: 261004 kB' 'Slab: 1348216 kB' 'SReclaimable: 261004 kB' 'SUnreclaim: 1087212 kB' 'KernelStack: 27200 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7470696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235536 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # continue 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.385 05:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.385 05:18:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.385 
05:18:13 -- setup/common.sh@32 -- # continue
[... remaining /proc/meminfo fields (SwapFree through HardwareCorrupted) are each tested against AnonHugePages and skipped with "continue" ...]
00:03:10.386 05:18:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.386 05:18:13 -- setup/common.sh@33 -- # echo 0
00:03:10.386 05:18:13 -- setup/common.sh@33 -- # return 0
00:03:10.386 05:18:13 -- setup/hugepages.sh@97 -- # anon=0
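For reference, the lookup that just returned anon=0 follows a simple pattern: read the meminfo source line by line, split each 'Field: value' pair, skip fields until the requested key matches, then echo its value. A minimal standalone bash sketch of that pattern (illustrative names, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Minimal sketch of a /proc/meminfo field lookup in the style of the
  # get_meminfo loop traced above; prints the value of the requested field.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip fields until the key matches
          echo "${val:-0}"
          return 0
      done < /proc/meminfo
      echo 0                                 # field not present
  }

  meminfo_value AnonHugePages   # e.g. prints 0 on this test node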
00:03:10.386 05:18:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.386 05:18:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.386 05:18:13 -- setup/common.sh@18 -- # local node=
00:03:10.386 05:18:13 -- setup/common.sh@19 -- # local var val
00:03:10.386 05:18:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.386 05:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.386 05:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.386 05:18:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.386 05:18:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.386 05:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.386 05:18:13 -- setup/common.sh@31 -- # IFS=': '
00:03:10.386 05:18:13 -- setup/common.sh@31 -- # read -r var val _
00:03:10.386 05:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109599124 kB' 'MemAvailable: 113066440 kB' 'Buffers: 5168 kB' 'Cached: 9817832 kB' 'SwapCached: 0 kB' 'Active: 6598844 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203500 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545416 kB' 'Mapped: 181964 kB' 'Shmem: 5661928 kB' 'KReclaimable: 261004 kB' 'Slab: 1348408 kB' 'SReclaimable: 261004 kB' 'SUnreclaim: 1087404 kB' 'KernelStack: 27168 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7469076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235488 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB'
[... every field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with "continue" ...]
00:03:10.388 05:18:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.388 05:18:13 -- setup/common.sh@33 -- # echo 0
00:03:10.388 05:18:13 -- setup/common.sh@33 -- # return 0
00:03:10.388 05:18:13 -- setup/hugepages.sh@99 -- # surp=0
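A note on the backslash-escaped keys in the trace (for example \H\u\g\e\P\a\g\e\s\_\S\u\r\p): under xtrace the right-hand side of the comparison is printed with each character escaped, which just means the key is matched as a literal string rather than a glob. A tiny standalone check, separate from the test scripts, showing that the escaped form is an exact match:

  #!/usr/bin/env bash
  # The escaped right-hand side seen under xtrace is a literal comparison:
  # each backslash-escaped character matches itself, so no globbing occurs.
  var=HugePages_Surp
  if [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; then
      echo "literal match: $var"
  fi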
00:03:10.388 05:18:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.388 05:18:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.388 05:18:13 -- setup/common.sh@18 -- # local node=
00:03:10.388 05:18:13 -- setup/common.sh@19 -- # local var val
00:03:10.388 05:18:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.388 05:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.388 05:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.388 05:18:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.388 05:18:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.388 05:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.388 05:18:13 -- setup/common.sh@31 -- # IFS=': '
00:03:10.388 05:18:13 -- setup/common.sh@31 -- # read -r var val _
00:03:10.388 05:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109599248 kB' 'MemAvailable: 113066564 kB' 'Buffers: 5168 kB' 'Cached: 9817848 kB' 'SwapCached: 0 kB' 'Active: 6598160 kB' 'Inactive: 3765728 kB' 'Active(anon): 6202816 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544268 kB' 'Mapped: 181964 kB' 'Shmem: 5661944 kB' 'KReclaimable: 261004 kB' 'Slab: 1348452 kB' 'SReclaimable: 261004 kB' 'SUnreclaim: 1087448 kB' 'KernelStack: 27120 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7470736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235536 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB'
[... every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with "continue" ...]
00:03:10.390 05:18:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.390 05:18:13 -- setup/common.sh@33 -- # echo 0
00:03:10.390 05:18:13 -- setup/common.sh@33 -- # return 0
00:03:10.390 05:18:13 -- setup/hugepages.sh@100 -- # resv=0
00:03:10.390 05:18:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:10.390 nr_hugepages=1024
00:03:10.390 05:18:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.390 resv_hugepages=0
00:03:10.390 05:18:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.390 surplus_hugepages=0
00:03:10.390 05:18:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.390 anon_hugepages=0
00:03:10.390 05:18:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.390 05:18:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
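The accounting just printed can be reproduced outside the test. The sketch below re-reads the same counters and repeats the arithmetic of the (( 1024 == nr_hugepages + surp + resv )) check above, taking HugePages_Total from /proc/meminfo as the left-hand side (an assumption; the trace obtains it through the equivalent get_meminfo call that follows) and hard-coding nr_hugepages=1024 as a stand-in for the value the test configured earlier:

  #!/usr/bin/env bash
  # Sketch of the hugepage accounting check (illustrative, not the SPDK code):
  # gather the counters just reported and verify the pool adds up.
  get_field() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

  nr_hugepages=1024                    # requested pool size (assumed, from test setup)
  surp=$(get_field HugePages_Surp)
  resv=$(get_field HugePages_Rsvd)
  anon=$(get_field AnonHugePages)      # reported in kB; 0 here
  total=$(get_field HugePages_Total)

  echo "nr_hugepages=${nr_hugepages} resv_hugepages=${resv} surplus_hugepages=${surp} anon_hugepages=${anon}"

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool accounted for: ${total} pages"
  else
      echo "unexpected hugepage accounting" >&2
      exit 1
  fi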
00:03:10.390 05:18:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.390 05:18:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.390 05:18:13 -- setup/common.sh@18 -- # local node=
00:03:10.390 05:18:13 -- setup/common.sh@19 -- # local var val
00:03:10.390 05:18:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.390 05:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.390 05:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.390 05:18:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.390 05:18:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.390 05:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.390 05:18:13 -- setup/common.sh@31 -- # IFS=': '
00:03:10.390 05:18:13 -- setup/common.sh@31 -- # read -r var val _
00:03:10.390 05:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109598364 kB' 'MemAvailable: 113065680 kB' 'Buffers: 5168 kB' 'Cached: 9817864 kB' 'SwapCached: 0 kB' 'Active: 6598080 kB' 'Inactive: 3765728 kB' 'Active(anon): 6202736 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544032 kB' 'Mapped: 181960 kB' 'Shmem: 5661960 kB' 'KReclaimable: 261004 kB' 'Slab: 1348452 kB' 'SReclaimable: 261004 kB' 'SUnreclaim: 1087448 kB' 'KernelStack: 27120 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7471288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB'
[... every field from MemTotal through Unaccepted is tested against HugePages_Total and skipped with "continue" ...]
00:03:10.392 05:18:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.392 05:18:13 -- setup/common.sh@33 -- # echo 1024
00:03:10.392 05:18:13 -- setup/common.sh@33 -- # return 0
00:03:10.392 05:18:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.392 05:18:13 -- setup/hugepages.sh@112 -- # get_nodes
00:03:10.392 05:18:13 -- setup/hugepages.sh@27 -- # local node
00:03:10.392 05:18:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.392 05:18:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:10.392 05:18:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.392 05:18:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:10.392 05:18:13 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:10.392 05:18:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.392 05:18:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.392 05:18:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
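The trace then repeats the lookup per NUMA node: with a node index supplied, the source switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading 'Node 0 ' prefix is stripped from each line, as the next block of trace shows. A minimal standalone sketch of that per-node variant (illustrative names, not the SPDK helper; the extglob pattern mirrors the mem=() stripping seen in the trace):

  #!/usr/bin/env bash
  shopt -s extglob
  # Per-NUMA-node meminfo lookup sketch: prefer the node's sysfs meminfo,
  # drop its "Node <N> " prefix, then scan for the requested field.
  node_meminfo_value() {
      local node=$1 get=$2 mem_f=/proc/meminfo line var val _
      if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
          mem_f=/sys/devices/system/node/node${node}/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }          # strip the per-node prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < "$mem_f"
      echo 0
  }

  node_meminfo_value 0 HugePages_Total   # e.g. prints 1024 for node0 here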
00:03:10.392 05:18:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:10.392 05:18:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.392 05:18:13 -- setup/common.sh@18 -- # local node=0
00:03:10.392 05:18:13 -- setup/common.sh@19 -- # local var val
00:03:10.392 05:18:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.392 05:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.392 05:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:10.392 05:18:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:10.392 05:18:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.392 05:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.392 05:18:13 -- setup/common.sh@31 -- # IFS=': '
00:03:10.392 05:18:13 -- setup/common.sh@31 -- # read -r var val _
00:03:10.392 05:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 59727272 kB' 'MemUsed: 5925692 kB' 'SwapCached: 0 kB' 'Active: 1807176 kB' 'Inactive: 175512 kB' 'Active(anon): 1614108 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898120 kB' 'Mapped: 68900 kB' 'AnonPages: 87772 kB' 'Shmem: 1529540 kB' 'KernelStack: 12120 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162788 kB' 'Slab: 705628 kB' 'SReclaimable: 162788 kB' 'SUnreclaim: 542840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... every node0 field from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped with "continue" ...]
00:03:10.393 05:18:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.393 05:18:13 -- setup/common.sh@33 -- # echo 0
00:03:10.393 05:18:13 -- setup/common.sh@33 -- # return 0
00:03:10.393 05:18:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.393 05:18:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.393 05:18:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.394 05:18:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.394 05:18:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:10.394 node0=1024 expecting 1024
00:03:10.394 05:18:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:10.394 
00:03:10.394 real 0m4.380s
00:03:10.394 user 0m1.662s
00:03:10.394 sys 0m2.737s
00:03:10.394 05:18:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:10.394 05:18:13 --
common/autotest_common.sh@10 -- # set +x 00:03:10.394 ************************************ 00:03:10.394 END TEST default_setup 00:03:10.394 ************************************ 00:03:10.394 05:18:13 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:10.394 05:18:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.394 05:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.394 05:18:13 -- common/autotest_common.sh@10 -- # set +x 00:03:10.394 ************************************ 00:03:10.394 START TEST per_node_1G_alloc 00:03:10.394 ************************************ 00:03:10.394 05:18:13 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:10.394 05:18:13 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:10.394 05:18:13 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:10.394 05:18:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:10.394 05:18:13 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:10.394 05:18:13 -- setup/hugepages.sh@51 -- # shift 00:03:10.394 05:18:13 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:10.394 05:18:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:10.394 05:18:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.394 05:18:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:10.394 05:18:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:10.394 05:18:13 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:10.394 05:18:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.394 05:18:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:10.394 05:18:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.394 05:18:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.394 05:18:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.394 05:18:13 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:10.394 05:18:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:10.394 05:18:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:10.394 05:18:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:10.394 05:18:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:10.394 05:18:13 -- setup/hugepages.sh@73 -- # return 0 00:03:10.394 05:18:13 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:10.394 05:18:13 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:10.394 05:18:13 -- setup/hugepages.sh@146 -- # setup output 00:03:10.394 05:18:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.394 05:18:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.605 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:14.605 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.4 (8086 0b00): Already 
using the vfio-pci driver 00:03:14.605 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:14.605 05:18:17 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:14.605 05:18:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:14.605 05:18:17 -- setup/hugepages.sh@89 -- # local node 00:03:14.605 05:18:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.605 05:18:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.605 05:18:17 -- setup/hugepages.sh@92 -- # local surp 00:03:14.605 05:18:17 -- setup/hugepages.sh@93 -- # local resv 00:03:14.605 05:18:17 -- setup/hugepages.sh@94 -- # local anon 00:03:14.605 05:18:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.605 05:18:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.605 05:18:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.605 05:18:17 -- setup/common.sh@18 -- # local node= 00:03:14.605 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.605 05:18:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.605 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.605 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.605 05:18:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.605 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.605 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109627672 kB' 'MemAvailable: 113094980 kB' 'Buffers: 5168 kB' 'Cached: 9817980 kB' 'SwapCached: 0 kB' 'Active: 6599280 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544740 kB' 'Mapped: 180896 kB' 'Shmem: 5662076 kB' 'KReclaimable: 260988 kB' 'Slab: 1348328 kB' 'SReclaimable: 260988 kB' 'SUnreclaim: 1087340 kB' 'KernelStack: 26992 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7459772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235520 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.605 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.605 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 
05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- 
setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.606 05:18:17 -- setup/common.sh@33 -- # echo 0 00:03:14.606 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.606 05:18:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:14.606 05:18:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.606 05:18:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.606 05:18:17 -- setup/common.sh@18 -- # local node= 00:03:14.606 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.606 05:18:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.606 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.606 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.606 05:18:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.606 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.606 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109630628 kB' 'MemAvailable: 113097936 kB' 'Buffers: 5168 kB' 'Cached: 9817980 kB' 'SwapCached: 0 kB' 'Active: 6598712 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203368 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544648 kB' 'Mapped: 180880 kB' 'Shmem: 5662076 kB' 'KReclaimable: 260988 kB' 'Slab: 1348312 kB' 'SReclaimable: 260988 kB' 'SUnreclaim: 1087324 kB' 'KernelStack: 27008 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7459784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235488 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
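The xtrace block above is setup/common.sh's get_meminfo resolving AnonHugePages to 0: it snapshots the meminfo source, then walks it key by key with IFS=': ' and read -r var val _, hitting continue for every non-matching key and echoing the value of the one requested. A minimal standalone sketch of that lookup pattern follows; the helper name my_get_meminfo is illustrative, not the real setup/common.sh function, and it reads /proc/meminfo directly rather than through the script's mapfile snapshot.

#!/usr/bin/env bash
# Minimal sketch of the key-lookup pattern traced above (illustrative, not the real get_meminfo).
my_get_meminfo() {
    local get=$1    # key to resolve, e.g. AnonHugePages or HugePages_Surp
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, which is what the long runs of
        # "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" lines in the trace show.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
# Example (assumes a Linux host): anon=$(my_get_meminfo AnonHugePages)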
00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.606 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.606 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.607 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.607 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.607 05:18:17 -- setup/common.sh@33 -- # echo 0 00:03:14.607 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.607 05:18:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:14.607 05:18:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.607 05:18:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.607 05:18:17 -- setup/common.sh@18 -- # local node= 00:03:14.607 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.607 05:18:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.608 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.608 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
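Every get_meminfo call in this run is made with node= left empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test above fails and the system-wide /proc/meminfo stays the source; the mapfile snapshot that follows strips any leading 'Node <N> ' prefix so the same key/value parsing works for per-node files as well. Below is a hedged sketch of that source selection, under the assumption that extglob is enabled (the "${mem[@]#Node +([0-9]) }" expansion in the trace requires it); pick_meminfo_source is an illustrative name, not the real script's.

#!/usr/bin/env bash
# Sketch of how the meminfo source appears to be chosen in the trace (names illustrative).
shopt -s extglob   # required for the +([0-9]) pattern used below

pick_meminfo_source() {
    local node=$1
    local mem_f=/proc/meminfo
    local -a mem
    # With an empty node the per-node path does not exist, so /proc/meminfo is kept,
    # matching the failed [[ -e ... ]] check in the trace above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 0 HugePages_Total: 1024"; dropping the prefix lets
    # one "key: value" parser handle both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}
# Example: pick_meminfo_source 0 | grep -w HugePages_Total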
00:03:14.608 05:18:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.608 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.608 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109631100 kB' 'MemAvailable: 113098408 kB' 'Buffers: 5168 kB' 'Cached: 9817992 kB' 'SwapCached: 0 kB' 'Active: 6598620 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203276 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544520 kB' 'Mapped: 180880 kB' 'Shmem: 5662088 kB' 'KReclaimable: 260988 kB' 'Slab: 1348300 kB' 'SReclaimable: 260988 kB' 'SUnreclaim: 1087312 kB' 'KernelStack: 26992 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7459796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235488 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 
05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 
-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.608 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.608 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 
05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.609 05:18:17 -- setup/common.sh@33 -- # echo 0 00:03:14.609 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.609 05:18:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:14.609 05:18:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.609 nr_hugepages=1024 00:03:14.609 05:18:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.609 resv_hugepages=0 00:03:14.609 05:18:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.609 surplus_hugepages=0 00:03:14.609 05:18:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.609 anon_hugepages=0 00:03:14.609 05:18:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.609 05:18:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.609 05:18:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.609 05:18:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.609 05:18:17 -- setup/common.sh@18 -- # local node= 00:03:14.609 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.609 05:18:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.609 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.609 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.609 05:18:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.609 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.609 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109632560 kB' 'MemAvailable: 113099868 kB' 'Buffers: 5168 kB' 'Cached: 9818008 kB' 'SwapCached: 0 kB' 'Active: 6598928 kB' 'Inactive: 3765728 kB' 'Active(anon): 6203584 kB' 'Inactive(anon): 0 kB' 'Active(file): 
395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544952 kB' 'Mapped: 180880 kB' 'Shmem: 5662104 kB' 'KReclaimable: 260988 kB' 'Slab: 1348300 kB' 'SReclaimable: 260988 kB' 'SUnreclaim: 1087312 kB' 'KernelStack: 27040 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7459812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235456 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.609 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.609 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 
00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 
00:03:14.610 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.610 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.610 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.611 05:18:17 -- setup/common.sh@33 -- # echo 1024 00:03:14.611 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.611 05:18:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.611 05:18:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.611 05:18:17 -- setup/hugepages.sh@27 -- # local node 00:03:14.611 05:18:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.611 05:18:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.611 05:18:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.611 05:18:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.611 05:18:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.611 05:18:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.611 05:18:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.611 05:18:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.611 05:18:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.611 05:18:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.611 05:18:17 -- setup/common.sh@18 -- # local node=0 00:03:14.611 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.611 05:18:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.611 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.611 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.611 05:18:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.611 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.611 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 60798268 kB' 'MemUsed: 4854696 kB' 'SwapCached: 0 kB' 'Active: 1807808 kB' 'Inactive: 175512 kB' 'Active(anon): 1614740 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898224 kB' 'Mapped: 68328 kB' 'AnonPages: 88292 kB' 'Shmem: 1529644 kB' 'KernelStack: 12104 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162788 kB' 'Slab: 705652 kB' 'SReclaimable: 162788 kB' 'SUnreclaim: 542864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 
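The get_meminfo calls traced above follow one pattern: with no node argument they read /proc/meminfo, with a node argument they switch to /sys/devices/system/node/node<N>/meminfo and drop the "Node N" prefix, then scan key/value pairs until the requested field matches (the heavily escaped patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how xtrace prints a plain string comparison). A simplified, self-contained sketch of that flow, not the exact setup/common.sh implementation:

    # Sketch only: return a single meminfo field, optionally for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}              # per-node sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Surp:   0" -> var=key, val=value
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }
    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this runner
    #      get_meminfo_sketch HugePages_Surp 0   -> 0
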
00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.611 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.611 05:18:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@33 -- # echo 0 00:03:14.612 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.612 05:18:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.612 05:18:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.612 05:18:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.612 05:18:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.612 05:18:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.612 05:18:17 -- setup/common.sh@18 -- # local node=1 00:03:14.612 05:18:17 -- setup/common.sh@19 -- # local var val 00:03:14.612 05:18:17 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:14.612 05:18:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.612 05:18:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.612 05:18:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.612 05:18:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.612 05:18:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671800 kB' 'MemFree: 48834208 kB' 'MemUsed: 11837592 kB' 'SwapCached: 0 kB' 'Active: 4790468 kB' 'Inactive: 3590216 kB' 'Active(anon): 4588192 kB' 'Inactive(anon): 0 kB' 'Active(file): 202276 kB' 'Inactive(file): 3590216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7924976 kB' 'Mapped: 112552 kB' 'AnonPages: 455832 kB' 'Shmem: 4132484 kB' 'KernelStack: 14872 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98200 kB' 'Slab: 642648 kB' 'SReclaimable: 98200 kB' 'SUnreclaim: 544448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 
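In the node0 and node1 passes above, verify_nr_hugepages adds any reserved and surplus pages to each node's target before printing the per-node result ("node0=512 expecting 512" and "node1=512 expecting 512" further down), so 512 + 512 accounts for the 1024 pages requested. A condensed model of that accounting, with illustrative names and awk standing in for the script's read loop:

    # Sketch only: per-node hugepage accounting (names and file choices are illustrative).
    declare -A want=( [0]=512 [1]=512 )            # per-node target set earlier in the test
    resv=$(awk '/^HugePages_Rsvd:/ {print $NF}' /proc/meminfo)
    for node in "${!want[@]}"; do
        sysfs=/sys/devices/system/node/node$node/meminfo
        have=$(awk '/HugePages_Total:/ {print $NF}' "$sysfs")
        surp=$(awk '/HugePages_Surp:/  {print $NF}' "$sysfs")
        (( want[node] += resv + surp ))            # reserved/surplus pages count toward the target
        echo "node${node}=${want[node]} expecting ${have}"
    done

On this runner both resv and surp are 0, so the comparison reduces to 512 against 512 on each node.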
00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.612 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.612 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 
05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # continue 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.613 05:18:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.613 05:18:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.613 05:18:17 -- setup/common.sh@33 -- # echo 0 00:03:14.613 05:18:17 -- setup/common.sh@33 -- # return 0 00:03:14.613 05:18:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.613 05:18:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.613 05:18:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.613 05:18:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:14.613 node0=512 expecting 512 00:03:14.613 05:18:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.613 05:18:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.613 05:18:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.613 05:18:17 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:14.613 node1=512 expecting 512 00:03:14.613 05:18:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:14.613 00:03:14.613 real 0m4.140s 00:03:14.613 user 0m1.597s 00:03:14.613 sys 0m2.614s 00:03:14.613 05:18:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:14.613 05:18:17 -- common/autotest_common.sh@10 -- # set +x 00:03:14.613 ************************************ 00:03:14.613 END TEST per_node_1G_alloc 00:03:14.613 ************************************ 00:03:14.613 05:18:17 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:14.613 05:18:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.613 05:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.613 05:18:17 -- common/autotest_common.sh@10 -- # set +x 00:03:14.613 ************************************ 00:03:14.613 START TEST even_2G_alloc 00:03:14.613 ************************************ 00:03:14.613 05:18:17 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:14.613 05:18:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:14.613 05:18:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:14.613 05:18:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:14.613 05:18:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:14.613 05:18:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:14.613 05:18:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.613 05:18:17 -- setup/hugepages.sh@64 -- # 
local _nr_hugepages=1024 00:03:14.613 05:18:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.613 05:18:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.613 05:18:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.613 05:18:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:14.613 05:18:17 -- setup/hugepages.sh@83 -- # : 512 00:03:14.613 05:18:17 -- setup/hugepages.sh@84 -- # : 1 00:03:14.613 05:18:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:14.613 05:18:17 -- setup/hugepages.sh@83 -- # : 0 00:03:14.613 05:18:17 -- setup/hugepages.sh@84 -- # : 0 00:03:14.613 05:18:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.613 05:18:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:14.613 05:18:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:14.613 05:18:17 -- setup/hugepages.sh@153 -- # setup output 00:03:14.613 05:18:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.613 05:18:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.831 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:18.831 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:18.831 05:18:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:18.831 05:18:21 -- setup/hugepages.sh@89 -- # local node 00:03:18.831 05:18:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.832 05:18:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.832 05:18:21 -- setup/hugepages.sh@92 -- # local surp 00:03:18.832 05:18:21 -- setup/hugepages.sh@93 -- # local resv 00:03:18.832 05:18:21 -- setup/hugepages.sh@94 -- # local anon 00:03:18.832 05:18:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.832 05:18:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.832 05:18:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.832 05:18:21 -- setup/common.sh@18 -- # local node= 00:03:18.832 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.832 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.832 05:18:21 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:18.832 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.832 05:18:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.832 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.832 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109673332 kB' 'MemAvailable: 113140628 kB' 'Buffers: 5168 kB' 'Cached: 9818124 kB' 'SwapCached: 0 kB' 'Active: 6600292 kB' 'Inactive: 3765728 kB' 'Active(anon): 6204948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546544 kB' 'Mapped: 180908 kB' 'Shmem: 5662220 kB' 'KReclaimable: 260964 kB' 'Slab: 1348612 kB' 'SReclaimable: 260964 kB' 'SUnreclaim: 1087648 kB' 'KernelStack: 27056 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7465624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
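The even_2G_alloc run that starts above asks for NRHUGE=1024 with HUGE_EVEN_ALLOC=yes, which the hugepages helper turns into an equal 512-page share per NUMA node; verify_nr_hugepages then checks /sys/kernel/mm/transparent_hugepage/enabled (reported here as "always [madvise] never") and, since THP is not forced off, samples AnonHugePages so transparent hugepages are not confused with the reserved pool. A rough sketch of those two steps, with hypothetical variable names and a plain division in place of the script's countdown loop:

    # Sketch only: even per-node split plus the THP-aware anon check.
    nr_hugepages=1024
    no_nodes=2
    declare -a share
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        share[node]=$(( nr_hugepages / no_nodes ))          # 512 per node on this 2-node box
    done
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "per-node share: ${share[*]}; AnonHugePages: ${anon} kB"

As the AnonHugePages scan that follows shows, the sampled value is 0 kB, so anon stays at 0 and only the explicit 2048 kB hugepage pool is counted.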
00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.832 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.832 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.833 05:18:21 -- setup/common.sh@33 -- # echo 0 00:03:18.833 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.833 05:18:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:18.833 05:18:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.833 05:18:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.833 05:18:21 -- setup/common.sh@18 -- # local node= 00:03:18.833 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.833 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.833 05:18:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.833 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.833 05:18:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.833 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.833 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109677964 kB' 'MemAvailable: 113145256 kB' 'Buffers: 5168 
kB' 'Cached: 9818128 kB' 'SwapCached: 0 kB' 'Active: 6600536 kB' 'Inactive: 3765728 kB' 'Active(anon): 6205192 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546492 kB' 'Mapped: 180892 kB' 'Shmem: 5662224 kB' 'KReclaimable: 260956 kB' 'Slab: 1348684 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087728 kB' 'KernelStack: 27200 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7463988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235568 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- 
setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.833 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.833 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.834 05:18:21 -- setup/common.sh@33 -- # echo 0 00:03:18.834 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.834 05:18:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.834 05:18:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.834 05:18:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.834 05:18:21 -- setup/common.sh@18 -- # local node= 00:03:18.834 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.834 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.834 05:18:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.834 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.834 05:18:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.834 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.834 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109674956 kB' 'MemAvailable: 113142248 kB' 'Buffers: 5168 kB' 'Cached: 9818140 kB' 'SwapCached: 0 kB' 'Active: 6599976 kB' 'Inactive: 3765728 kB' 'Active(anon): 6204632 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545744 kB' 'Mapped: 180892 kB' 'Shmem: 5662236 kB' 'KReclaimable: 260956 kB' 'Slab: 1348588 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087632 kB' 'KernelStack: 27168 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7483824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.834 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.834 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 
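The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches the requested key (AnonHugePages and HugePages_Surp both came back 0; HugePages_Rsvd is being scanned below against the same snapshot). A condensed sketch of an equivalent lookup follows. meminfo_value is a hypothetical helper name, not part of the SPDK scripts, and the per-node handling only assumes the "Node <N> " prefix that the traced mapfile step strips.

shopt -s extglob    # needed for the +([0-9]) prefix strip, as in the traced script

# meminfo_value KEY [NODE] -- print KEY's value from /proc/meminfo, or from
# /sys/devices/system/node/node<NODE>/meminfo when a node index is given.
meminfo_value() {
    local key=$1 node=${2:-} src=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        src=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # per-node files prefix each row with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then        # the comparison the trace repeats once per field
            echo "${val:-0}"
            return 0
        fi
    done <"$src"
    echo 0                                   # key not present: this sketch falls back to 0
}

# meminfo_value HugePages_Surp      -> 0 on this host, matching surp=0 above
# meminfo_value HugePages_Total 0   -> 512, per the node0 snapshot further down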
00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # 
continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.835 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.835 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.836 05:18:21 -- setup/common.sh@33 -- # echo 0 00:03:18.836 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.836 05:18:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.836 05:18:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.836 nr_hugepages=1024 00:03:18.836 05:18:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.836 resv_hugepages=0 00:03:18.836 05:18:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.836 surplus_hugepages=0 00:03:18.836 05:18:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.836 anon_hugepages=0 00:03:18.836 05:18:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.836 05:18:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.836 05:18:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.836 05:18:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.836 05:18:21 -- setup/common.sh@18 -- # local node= 00:03:18.836 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.836 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.836 05:18:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.836 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.836 05:18:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.836 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.836 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109673236 kB' 'MemAvailable: 113140528 kB' 'Buffers: 5168 kB' 'Cached: 9818152 kB' 'SwapCached: 0 kB' 'Active: 6599936 kB' 'Inactive: 3765728 kB' 'Active(anon): 6204592 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545712 kB' 'Mapped: 180892 kB' 'Shmem: 5662248 kB' 'KReclaimable: 260956 kB' 'Slab: 1348588 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087632 kB' 'KernelStack: 27152 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7465288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 
05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.836 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.836 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 
05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.837 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.837 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.837 05:18:21 -- setup/common.sh@33 -- # echo 1024 00:03:18.837 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.837 05:18:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.837 05:18:21 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.837 05:18:21 -- setup/hugepages.sh@27 -- # local node 00:03:18.837 05:18:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.837 05:18:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.837 05:18:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.837 05:18:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.837 05:18:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.837 05:18:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.837 05:18:21 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:18.837 05:18:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.837 05:18:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.838 05:18:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.838 05:18:21 -- setup/common.sh@18 -- # local node=0 00:03:18.838 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.838 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.838 05:18:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.838 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.838 05:18:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.838 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.838 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 60832068 kB' 'MemUsed: 4820896 kB' 'SwapCached: 0 kB' 'Active: 1807868 kB' 'Inactive: 175512 kB' 'Active(anon): 1614800 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898316 kB' 'Mapped: 68340 kB' 'AnonPages: 88316 kB' 'Shmem: 1529736 kB' 'KernelStack: 12120 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162788 kB' 'Slab: 705760 kB' 'SReclaimable: 162788 kB' 'SUnreclaim: 542972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 
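The per-node pass running here is bookkeeping on numbers already echoed above: nr_hugepages=1024 with resv=0 and surp=0, split across two NUMA nodes at 512 pages each (nodes_sys is set to 512 for both nodes, and the node snapshots report HugePages_Total: 512 apiece). In shell terms the consistency check amounts to roughly the following; the initialisation shown is illustrative, since in the traced script nodes_test is populated from nodes_sys and per-node get_meminfo calls rather than assigned literally.

nr_hugepages=1024 resv=0 surp=0
nodes_test=([0]=512 [1]=512)     # 512 x 2048 kB pages on node0 and node1 = 1 GiB each
(( nodes_test[0] + nodes_test[1] + surp + resv == nr_hugepages )) &&
    echo "the two-node split accounts for all 1024 hugepages"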
00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.838 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.838 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@33 -- # echo 0 00:03:18.839 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.839 05:18:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.839 05:18:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.839 05:18:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.839 05:18:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.839 05:18:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.839 05:18:21 -- setup/common.sh@18 -- # local node=1 00:03:18.839 05:18:21 -- setup/common.sh@19 -- # local var val 00:03:18.839 05:18:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.839 05:18:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.839 05:18:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.839 05:18:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.839 05:18:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.839 05:18:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671800 kB' 'MemFree: 48839684 kB' 'MemUsed: 11832116 kB' 'SwapCached: 0 kB' 'Active: 4791304 kB' 'Inactive: 3590216 kB' 'Active(anon): 4589028 kB' 'Inactive(anon): 0 kB' 'Active(file): 202276 kB' 'Inactive(file): 3590216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7925028 kB' 'Mapped: 112552 kB' 'AnonPages: 456596 kB' 'Shmem: 4132536 kB' 'KernelStack: 14792 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98168 kB' 'Slab: 642828 kB' 'SReclaimable: 98168 kB' 'SUnreclaim: 544660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.839 05:18:21 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 
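Just before this block the trace shows the same get_meminfo being retargeted at NUMA node 1: because /sys/devices/system/node/node1/meminfo exists, mem_f is switched from /proc/meminfo to the per-node file, and the "Node 1 " prefix on every line is stripped (the mem=("${mem[@]#Node +([0-9]) }") step) so the records parse like ordinary meminfo lines. A rough sketch of that source selection, assuming extglob and using an illustrative function name rather than the script's own:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    read_node_meminfo() {
        # Read either /proc/meminfo or one node's meminfo, normalised to "Key: value" lines.
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
        printf '%s\n' "${mem[@]}"
    }
    read_node_meminfo 1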
00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- 
setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.839 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.839 05:18:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # continue 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.840 05:18:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.840 05:18:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.840 05:18:21 -- setup/common.sh@33 -- # echo 0 00:03:18.840 05:18:21 -- setup/common.sh@33 -- # return 0 00:03:18.840 05:18:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.840 05:18:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.840 05:18:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.840 
05:18:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.840 node0=512 expecting 512 00:03:18.840 05:18:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.840 05:18:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.840 05:18:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.840 05:18:21 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.840 node1=512 expecting 512 00:03:18.840 05:18:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.840 00:03:18.840 real 0m4.144s 00:03:18.840 user 0m1.639s 00:03:18.840 sys 0m2.569s 00:03:18.840 05:18:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:18.840 05:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:18.840 ************************************ 00:03:18.840 END TEST even_2G_alloc 00:03:18.840 ************************************ 00:03:18.840 05:18:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:18.840 05:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.840 05:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.840 05:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:18.840 ************************************ 00:03:18.840 START TEST odd_alloc 00:03:18.840 ************************************ 00:03:18.840 05:18:21 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:18.840 05:18:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:18.840 05:18:21 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:18.840 05:18:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:18.840 05:18:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.840 05:18:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.840 05:18:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.840 05:18:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:18.840 05:18:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.840 05:18:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.840 05:18:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.840 05:18:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.840 05:18:21 -- setup/hugepages.sh@83 -- # : 513 00:03:18.840 05:18:21 -- setup/hugepages.sh@84 -- # : 1 00:03:18.840 05:18:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:18.840 05:18:21 -- setup/hugepages.sh@83 -- # : 0 00:03:18.840 05:18:21 -- setup/hugepages.sh@84 -- # : 0 00:03:18.840 05:18:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.840 05:18:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:18.840 05:18:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:18.840 05:18:21 -- setup/hugepages.sh@160 -- # setup output 00:03:18.840 05:18:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.840 05:18:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.052 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.7 (8086 0b00): Already using 
the vfio-pci driver 00:03:23.052 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:23.052 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:23.052 05:18:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:23.052 05:18:25 -- setup/hugepages.sh@89 -- # local node 00:03:23.052 05:18:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.052 05:18:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.052 05:18:25 -- setup/hugepages.sh@92 -- # local surp 00:03:23.052 05:18:25 -- setup/hugepages.sh@93 -- # local resv 00:03:23.052 05:18:25 -- setup/hugepages.sh@94 -- # local anon 00:03:23.052 05:18:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.052 05:18:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.052 05:18:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.052 05:18:25 -- setup/common.sh@18 -- # local node= 00:03:23.052 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.052 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.052 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.052 05:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.052 05:18:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.052 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.052 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109706180 kB' 'MemAvailable: 113173472 kB' 'Buffers: 5168 kB' 'Cached: 9818288 kB' 'SwapCached: 0 kB' 'Active: 6601660 kB' 'Inactive: 3765728 kB' 'Active(anon): 6206316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547336 kB' 'Mapped: 182020 kB' 'Shmem: 5662384 kB' 'KReclaimable: 260956 kB' 'Slab: 1348560 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087604 kB' 'KernelStack: 27088 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501384 kB' 'Committed_AS: 7496164 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235584 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.052 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.052 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- 
setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.053 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.053 05:18:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.054 05:18:25 -- setup/common.sh@33 -- # echo 0 00:03:23.054 05:18:25 -- setup/common.sh@33 -- # return 0 00:03:23.054 05:18:25 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.054 05:18:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.054 05:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.054 05:18:25 -- setup/common.sh@18 -- # local node= 00:03:23.054 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.054 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.054 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.054 05:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.054 05:18:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.054 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.054 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109710204 kB' 'MemAvailable: 113177496 kB' 'Buffers: 5168 kB' 'Cached: 9818292 kB' 'SwapCached: 0 kB' 'Active: 6601804 kB' 'Inactive: 3765728 kB' 'Active(anon): 6206460 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547532 kB' 'Mapped: 182020 kB' 'Shmem: 5662388 kB' 'KReclaimable: 260956 kB' 'Slab: 1348596 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087640 kB' 'KernelStack: 27056 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501384 kB' 'Committed_AS: 7496176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235504 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 
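The anon=0 just above comes from a transparent-hugepage check: hugepages.sh first tests the THP mode string (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] expression earlier in the trace, i.e. "is the active mode anything other than [never]") and, since THP is not disabled, reads AnonHugePages from meminfo. A compact sketch of that step, using the standard sysfs path and an awk read in place of the script's get_meminfo helper:

    # Count THP-backed anonymous huge pages only when THP is not set to [never].
    thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
    anon=0
    if [[ -r $thp_enabled && $(<"$thp_enabled") != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # reported in kB
    fi
    echo "anon_hugepages=$anon"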
00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.054 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.054 05:18:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.054 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 
05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 
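Stepping back from the field-by-field scan: the odd_alloc setup traced earlier asked for 2098176 kB, i.e. 1025 two-megabyte pages, and get_test_nr_hugepages_per_node spread them over the two NUMA nodes as 513 + 512, with one node absorbing the odd page. A minimal sketch of that uneven split, with the 1025-page count and two-node layout taken from the trace and the loop itself written illustratively:

    # Split an odd hugepage count across two NUMA nodes: 1025 -> 513 + 512.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))      # 512 per node
    done
    (( nodes_test[0] += nr_hugepages % no_nodes ))           # the leftover page lands on one node
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"     # node0=513 node1=512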
00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.055 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.055 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.056 05:18:25 -- setup/common.sh@33 -- # 
echo 0 00:03:23.056 05:18:25 -- setup/common.sh@33 -- # return 0 00:03:23.056 05:18:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.056 05:18:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.056 05:18:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.056 05:18:25 -- setup/common.sh@18 -- # local node= 00:03:23.056 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.056 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.056 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.056 05:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.056 05:18:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.056 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.056 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.056 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109710204 kB' 'MemAvailable: 113177496 kB' 'Buffers: 5168 kB' 'Cached: 9818304 kB' 'SwapCached: 0 kB' 'Active: 6602268 kB' 'Inactive: 3765728 kB' 'Active(anon): 6206924 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547944 kB' 'Mapped: 182524 kB' 'Shmem: 5662400 kB' 'KReclaimable: 260956 kB' 'Slab: 1348628 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087672 kB' 'KernelStack: 27088 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501384 kB' 'Committed_AS: 7497548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235504 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- 
setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.056 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.056 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 
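The scan in progress here is the reserved-pages half of the same bookkeeping: verify_nr_hugepages has already taken surp from HugePages_Surp, is now after HugePages_Rsvd for resv, and combines them with the anon figure and the 1025 pages the test requested (the nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 echoes and the nr_hugepages + surp + resv arithmetic that follow in the trace). A condensed sketch of that bookkeeping, with the field names from /proc/meminfo, the 1025 expectation from the test, and the helper and exact condition written illustratively rather than copied from the script:

    # Gather the hugepage counters and check them against the requested count.
    expected=1025
    read_field() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }
    total=$(read_field HugePages_Total)
    surp=$(read_field HugePages_Surp)
    resv=$(read_field HugePages_Rsvd)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    if (( expected == total + surp + resv )); then
        echo "hugepage count matches the requested $expected"
    else
        echo "hugepage count mismatch: total=$total surp=$surp resv=$resv" >&2
    fi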
00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.057 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.057 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.058 05:18:25 -- setup/common.sh@33 -- # echo 0 00:03:23.058 05:18:25 -- setup/common.sh@33 -- # return 0 00:03:23.058 05:18:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.058 05:18:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:23.058 nr_hugepages=1025 00:03:23.058 05:18:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.058 resv_hugepages=0 00:03:23.058 05:18:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.058 surplus_hugepages=0 00:03:23.058 05:18:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.058 anon_hugepages=0 00:03:23.058 05:18:25 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.058 05:18:25 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:23.058 05:18:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.058 05:18:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.058 05:18:25 -- setup/common.sh@18 -- # local node= 00:03:23.058 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.058 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.058 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.058 05:18:25 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:23.058 05:18:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.058 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.058 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109703400 kB' 'MemAvailable: 113170692 kB' 'Buffers: 5168 kB' 'Cached: 9818316 kB' 'SwapCached: 0 kB' 'Active: 6606832 kB' 'Inactive: 3765728 kB' 'Active(anon): 6211488 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552980 kB' 'Mapped: 182524 kB' 'Shmem: 5662412 kB' 'KReclaimable: 260956 kB' 'Slab: 1348628 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087672 kB' 'KernelStack: 27072 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70501384 kB' 'Committed_AS: 7502328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.058 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.058 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 
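[Annotation] Once the reserved count comes back as 0, setup/hugepages.sh (lines @100-@110 in the trace above) only has to compare the kernel-reported totals with what the odd_alloc test asked for: 1025 pages overall, spread across the two NUMA nodes as 512 + 513 (the final comparison later in the log is done on sorted per-node counts, so which node holds the odd page does not matter). Written out as a plain arithmetic check with the numbers from this run (variable names here are illustrative):

  # Values echoed by get_meminfo in this run.
  nr_hugepages=1025   # requested by the odd_alloc test
  resv=0              # HugePages_Rsvd
  surp=0              # HugePages_Surp
  total=1025          # HugePages_Total from the meminfo dump above

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting is consistent"
  fi
  # The per-node reads that follow then confirm the 512 (node0) + 513 (node1) split.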
00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.059 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.059 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.060 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.060 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.060 05:18:25 -- setup/common.sh@33 -- # echo 1025 00:03:23.060 05:18:25 -- setup/common.sh@33 -- # return 0 00:03:23.060 05:18:25 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.060 05:18:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.060 05:18:25 -- setup/hugepages.sh@27 -- # local node 00:03:23.060 05:18:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.060 05:18:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.060 05:18:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.060 05:18:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:23.060 05:18:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.060 05:18:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.060 05:18:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.060 05:18:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.060 05:18:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.060 05:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.060 05:18:25 -- setup/common.sh@18 -- # local node=0 00:03:23.060 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.060 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.060 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.061 05:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.061 05:18:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.061 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.061 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 60833576 kB' 'MemUsed: 4819388 kB' 'SwapCached: 0 kB' 'Active: 1808556 kB' 'Inactive: 175512 kB' 'Active(anon): 1615488 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898400 kB' 'Mapped: 68644 kB' 'AnonPages: 88952 kB' 'Shmem: 1529820 kB' 
'KernelStack: 12136 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162788 kB' 'Slab: 705716 kB' 'SReclaimable: 162788 kB' 'SUnreclaim: 542928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.061 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.061 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 
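[Annotation] For the per-node pass, get_meminfo is called with a node index, so (as the trace shows) mem_f switches to /sys/devices/system/node/node0/meminfo and the "Node <N> " prefix on each line is stripped before the same key scan runs. A minimal stand-alone version of that per-node lookup (the helper name is illustrative; it skips the two prefix fields instead of stripping them from an array as the script does):

  # Print one field from a NUMA node's meminfo, e.g. HugePages_Surp for node 0.
  node_meminfo_value() {
      local node=$1 get=$2 var val _
      # Each line looks like "Node 0 HugePages_Surp:  0"; drop the first two fields.
      while IFS=': ' read -r _ _ var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
  }

  node_meminfo_value 0 HugePages_Surp   # prints 0 for node0 in this run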
05:18:25 -- setup/common.sh@33 -- # echo 0 00:03:23.062 05:18:25 -- setup/common.sh@33 -- # return 0 00:03:23.062 05:18:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.062 05:18:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.062 05:18:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.062 05:18:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.062 05:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.062 05:18:25 -- setup/common.sh@18 -- # local node=1 00:03:23.062 05:18:25 -- setup/common.sh@19 -- # local var val 00:03:23.062 05:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.062 05:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.062 05:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.062 05:18:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.062 05:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.062 05:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671800 kB' 'MemFree: 48875068 kB' 'MemUsed: 11796732 kB' 'SwapCached: 0 kB' 'Active: 4793304 kB' 'Inactive: 3590216 kB' 'Active(anon): 4591028 kB' 'Inactive(anon): 0 kB' 'Active(file): 202276 kB' 'Inactive(file): 3590216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7925100 kB' 'Mapped: 113636 kB' 'AnonPages: 458580 kB' 'Shmem: 4132608 kB' 'KernelStack: 14952 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98168 kB' 'Slab: 642912 kB' 'SReclaimable: 98168 kB' 'SUnreclaim: 544744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.062 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.062 05:18:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- 
setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # continue 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.063 05:18:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.063 05:18:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.063 05:18:26 -- setup/common.sh@33 -- # echo 0 00:03:23.063 05:18:26 -- setup/common.sh@33 -- # return 0 00:03:23.063 05:18:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.063 05:18:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.063 05:18:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.063 05:18:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:23.063 node0=512 expecting 513 00:03:23.063 05:18:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.063 05:18:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.063 05:18:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.063 05:18:26 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:23.063 node1=513 expecting 512 00:03:23.063 05:18:26 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:23.063 00:03:23.063 real 0m4.096s 00:03:23.063 user 0m1.621s 00:03:23.063 sys 0m2.547s 00:03:23.063 05:18:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:23.063 05:18:26 -- common/autotest_common.sh@10 -- # set +x 00:03:23.063 ************************************ 00:03:23.063 END TEST odd_alloc 00:03:23.063 ************************************ 00:03:23.063 05:18:26 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:23.063 05:18:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.063 05:18:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.063 05:18:26 -- common/autotest_common.sh@10 -- # set +x 00:03:23.063 ************************************ 00:03:23.063 START TEST custom_alloc 00:03:23.063 ************************************ 00:03:23.063 05:18:26 -- 
common/autotest_common.sh@1114 -- # custom_alloc 00:03:23.063 05:18:26 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:23.063 05:18:26 -- setup/hugepages.sh@169 -- # local node 00:03:23.063 05:18:26 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:23.063 05:18:26 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:23.063 05:18:26 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:23.063 05:18:26 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:23.063 05:18:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.063 05:18:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.063 05:18:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.063 05:18:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.063 05:18:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.063 05:18:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.063 05:18:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.063 05:18:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.063 05:18:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.063 05:18:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.063 05:18:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:23.063 05:18:26 -- setup/hugepages.sh@83 -- # : 256 00:03:23.063 05:18:26 -- setup/hugepages.sh@84 -- # : 1 00:03:23.064 05:18:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:23.064 05:18:26 -- setup/hugepages.sh@83 -- # : 0 00:03:23.064 05:18:26 -- setup/hugepages.sh@84 -- # : 0 00:03:23.064 05:18:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:23.064 05:18:26 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:23.064 05:18:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.064 05:18:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:23.064 05:18:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.064 05:18:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.064 05:18:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.064 05:18:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.064 05:18:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.064 05:18:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.064 05:18:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.064 05:18:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:23.064 05:18:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:23.064 05:18:26 -- setup/hugepages.sh@78 -- # return 0 00:03:23.064 05:18:26 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:23.064 05:18:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:23.064 05:18:26 -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:23.064 05:18:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:23.064 05:18:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:23.064 05:18:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:23.064 05:18:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.064 05:18:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.064 05:18:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.064 05:18:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.064 05:18:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.064 05:18:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.064 05:18:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:23.064 05:18:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:23.064 05:18:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:23.064 05:18:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:23.064 05:18:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:23.064 05:18:26 -- setup/hugepages.sh@78 -- # return 0 00:03:23.064 05:18:26 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:23.064 05:18:26 -- setup/hugepages.sh@187 -- # setup output 00:03:23.064 05:18:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.064 05:18:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.371 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.371 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.371 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.631 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.631 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.895 05:18:30 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:26.895 05:18:30 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:26.895 05:18:30 -- setup/hugepages.sh@89 -- # local node 00:03:26.895 05:18:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.895 05:18:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.895 05:18:30 -- setup/hugepages.sh@92 -- # local surp 00:03:26.895 05:18:30 -- setup/hugepages.sh@93 -- # local resv 00:03:26.895 05:18:30 -- 
setup/hugepages.sh@94 -- # local anon 00:03:26.895 05:18:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.895 05:18:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.895 05:18:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.895 05:18:30 -- setup/common.sh@18 -- # local node= 00:03:26.895 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:26.895 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.895 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.895 05:18:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.895 05:18:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.895 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.895 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 108679736 kB' 'MemAvailable: 112147028 kB' 'Buffers: 5168 kB' 'Cached: 9818444 kB' 'SwapCached: 0 kB' 'Active: 6602704 kB' 'Inactive: 3765728 kB' 'Active(anon): 6207360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548088 kB' 'Mapped: 182056 kB' 'Shmem: 5662540 kB' 'KReclaimable: 260956 kB' 'Slab: 1348688 kB' 'SReclaimable: 260956 kB' 'SUnreclaim: 1087732 kB' 'KernelStack: 27104 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978120 kB' 'Committed_AS: 7496976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235648 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- 
setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ 
Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.895 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.895 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.896 05:18:30 -- setup/common.sh@33 -- # echo 0 00:03:26.896 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:26.896 05:18:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.896 05:18:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.896 05:18:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.896 05:18:30 -- setup/common.sh@18 -- # local node= 00:03:26.896 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:26.896 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.896 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.896 05:18:30 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:26.896 05:18:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.896 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.896 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 108682300 kB' 'MemAvailable: 112149584 kB' 'Buffers: 5168 kB' 'Cached: 9818448 kB' 'SwapCached: 0 kB' 'Active: 6603132 kB' 'Inactive: 3765728 kB' 'Active(anon): 6207788 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548732 kB' 'Mapped: 182032 kB' 'Shmem: 5662544 kB' 'KReclaimable: 260940 kB' 'Slab: 1348620 kB' 'SReclaimable: 260940 kB' 'SUnreclaim: 1087680 kB' 'KernelStack: 27088 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978120 kB' 'Committed_AS: 7496988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 
05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- 
setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.896 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.896 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.897 05:18:30 -- setup/common.sh@33 -- # echo 0 00:03:26.897 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:26.897 05:18:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.897 05:18:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.897 05:18:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.897 05:18:30 -- setup/common.sh@18 -- # local node= 00:03:26.897 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:26.897 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.897 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.897 05:18:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.897 05:18:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.897 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.897 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 108684712 kB' 'MemAvailable: 112151996 kB' 'Buffers: 5168 kB' 'Cached: 9818452 kB' 'SwapCached: 0 kB' 'Active: 6604212 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208868 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550400 kB' 'Mapped: 182032 kB' 'Shmem: 5662548 kB' 'KReclaimable: 260940 kB' 
'Slab: 1348716 kB' 'SReclaimable: 260940 kB' 'SUnreclaim: 1087776 kB' 'KernelStack: 27072 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978120 kB' 'Committed_AS: 7497004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 
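The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' entries above are bash xtrace from setup/common.sh's get_meminfo helper: it mapfiles /proc/meminfo (or /sys/devices/system/node/node$N/meminfo when a node argument is given), strips any leading 'Node N ' prefix, splits each line on ': ', and echoes the value column of the requested field (AnonHugePages and HugePages_Surp above both came back 0; HugePages_Rsvd is being read the same way here). A minimal standalone sketch of that lookup, assuming the standard meminfo layout; the name meminfo_value is illustrative and not part of the SPDK scripts:

  #!/usr/bin/env bash
  # Sketch of a meminfo lookup in the style of setup/common.sh's get_meminfo.
  # meminfo_value <field> [numa-node]  ->  prints the field's value column.
  shopt -s extglob

  meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node N "; strip it so the
      # field name sits in the first column, just like /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  # e.g.  meminfo_value HugePages_Total      (system-wide)
  #       meminfo_value HugePages_Free 0     (NUMA node 0)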
00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.897 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.897 05:18:30 -- setup/common.sh@33 -- # echo 0 00:03:26.897 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:26.897 05:18:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.897 05:18:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:26.897 nr_hugepages=1536 00:03:26.897 05:18:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.897 resv_hugepages=0 00:03:26.897 05:18:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.897 surplus_hugepages=0 00:03:26.897 05:18:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.897 anon_hugepages=0 00:03:26.897 05:18:30 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.897 05:18:30 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:26.897 05:18:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.897 05:18:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.897 05:18:30 -- setup/common.sh@18 -- # local node= 00:03:26.897 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:26.897 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.897 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.897 05:18:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.897 05:18:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.897 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.897 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.897 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 108684220 kB' 'MemAvailable: 112151504 kB' 'Buffers: 5168 kB' 'Cached: 9818472 kB' 'SwapCached: 0 kB' 'Active: 6606244 kB' 'Inactive: 3765728 kB' 'Active(anon): 6210900 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552196 kB' 'Mapped: 182032 kB' 'Shmem: 5662568 kB' 'KReclaimable: 260940 kB' 'Slab: 1348716 kB' 'SReclaimable: 260940 kB' 'SUnreclaim: 1087776 kB' 'KernelStack: 27136 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69978120 kB' 'Committed_AS: 7501804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
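Taken together with the HugePages_Total lookup that completes just below, the values read so far make verify_nr_hugepages' bookkeeping work out: anon=0, surp=0, resv=0 and the kernel reports HugePages_Total: 1536, so the earlier check (( 1536 == nr_hugepages + surp + resv )) holds, and the per-node split requested through HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (512 pages on node 0, 1024 on node 1) is then confirmed node by node in the entries that follow. A condensed sketch of that arithmetic, using the values from this trace rather than the script's exact control flow:

  #!/usr/bin/env bash
  # Values taken from the trace: requested total and the per-node request.
  nr_hugepages=1536
  anon=0 surp=0 resv=0
  nodes_test=([0]=512 [1]=1024)

  # Overall total reported by the kernel must match requested + surplus + reserved.
  (( 1536 == nr_hugepages + surp + resv )) && echo "total OK"

  # The per-node requests must add up to the same total.
  (( nodes_test[0] + nodes_test[1] == nr_hugepages )) && echo "per-node split OK"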
00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # continue 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.898 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.898 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.898 05:18:30 -- setup/common.sh@33 -- # echo 1536 00:03:26.898 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:26.898 05:18:30 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.898 05:18:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.898 05:18:30 -- setup/hugepages.sh@27 -- # local node 00:03:26.898 05:18:30 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:26.898 05:18:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.898 05:18:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.898 05:18:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.898 05:18:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.898 05:18:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.898 05:18:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.898 05:18:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.898 05:18:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.898 05:18:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.898 05:18:30 -- setup/common.sh@18 -- # local node=0 00:03:26.898 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:26.898 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.898 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.161 05:18:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.161 05:18:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.161 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.161 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 60854860 kB' 'MemUsed: 4798104 kB' 'SwapCached: 0 kB' 'Active: 1811392 kB' 'Inactive: 175512 kB' 'Active(anon): 1618324 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898500 kB' 'Mapped: 68400 kB' 'AnonPages: 91900 kB' 'Shmem: 1529920 kB' 'KernelStack: 12168 kB' 'PageTables: 3516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162764 kB' 'Slab: 705828 kB' 'SReclaimable: 162764 kB' 'SUnreclaim: 543064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.161 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.161 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- 
setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@33 -- # echo 0 00:03:27.162 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:27.162 05:18:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.162 05:18:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.162 05:18:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.162 05:18:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.162 05:18:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.162 05:18:30 -- setup/common.sh@18 -- # local node=1 00:03:27.162 05:18:30 -- setup/common.sh@19 -- # local var val 00:03:27.162 05:18:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.162 05:18:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.162 05:18:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.162 05:18:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.162 05:18:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.162 05:18:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60671800 kB' 'MemFree: 47833448 kB' 'MemUsed: 12838352 kB' 'SwapCached: 0 kB' 'Active: 4794940 kB' 'Inactive: 3590216 kB' 'Active(anon): 4592664 kB' 'Inactive(anon): 0 kB' 'Active(file): 202276 kB' 'Inactive(file): 3590216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'FilePages: 7925156 kB' 'Mapped: 113632 kB' 'AnonPages: 460472 kB' 'Shmem: 4132664 kB' 'KernelStack: 14968 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98168 kB' 'Slab: 642888 kB' 'SReclaimable: 98168 kB' 'SUnreclaim: 544720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.162 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.162 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 
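The node-scoped lookups here (node=0, then node=1) reuse the same scan but point it at the per-node meminfo file and strip the 'Node N ' prefix first, as the mem_f / mapfile entries in the trace show. A hedged sketch of that branch (the input redirection is assumed, since xtrace does not print redirections):

shopt -s extglob                                   # needed for the +([0-9]) pattern
node=1 mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"                          # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")                   # drop the leading "Node 1 "
printf '%s\n' "${mem[@]}"                          # then scanned by the same read loop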
00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- 
setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@32 -- # continue 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.163 05:18:30 -- setup/common.sh@31 -- # read -r var val _ 
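The surplus lookups for node 0 and node 1 both come back 0, and the entries just below print 'node0=512 expecting 512' and 'node1=1024 expecting 1024'. An approximate reconstruction of the per-node bookkeeping in setup/hugepages.sh that the trace is exercising (shape inferred from the trace, not quoted from the script):

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                    # spread reserved pages per node
    surp=$(get_meminfo HugePages_Surp "$node")        # per-node surplus, 0 in this run
    (( nodes_test[node] += surp ))
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done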
00:03:27.163 05:18:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.163 05:18:30 -- setup/common.sh@33 -- # echo 0 00:03:27.163 05:18:30 -- setup/common.sh@33 -- # return 0 00:03:27.163 05:18:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.163 05:18:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.163 05:18:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.163 05:18:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.163 05:18:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.163 node0=512 expecting 512 00:03:27.163 05:18:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.163 05:18:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.163 05:18:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.163 05:18:30 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:27.163 node1=1024 expecting 1024 00:03:27.163 05:18:30 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:27.163 00:03:27.163 real 0m4.109s 00:03:27.163 user 0m1.623s 00:03:27.163 sys 0m2.560s 00:03:27.163 05:18:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.163 05:18:30 -- common/autotest_common.sh@10 -- # set +x 00:03:27.163 ************************************ 00:03:27.163 END TEST custom_alloc 00:03:27.163 ************************************ 00:03:27.163 05:18:30 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:27.163 05:18:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.163 05:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.163 05:18:30 -- common/autotest_common.sh@10 -- # set +x 00:03:27.163 ************************************ 00:03:27.163 START TEST no_shrink_alloc 00:03:27.163 ************************************ 00:03:27.163 05:18:30 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:27.163 05:18:30 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:27.163 05:18:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.163 05:18:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.163 05:18:30 -- setup/hugepages.sh@51 -- # shift 00:03:27.163 05:18:30 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.163 05:18:30 -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.163 05:18:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.163 05:18:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.163 05:18:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.163 05:18:30 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.163 05:18:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.163 05:18:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.163 05:18:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.163 05:18:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.163 05:18:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.163 05:18:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.163 05:18:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.163 05:18:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.163 05:18:30 -- setup/hugepages.sh@73 -- # return 0 00:03:27.163 05:18:30 -- setup/hugepages.sh@198 -- # setup output 00:03:27.163 05:18:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.163 05:18:30 -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.375 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:31.375 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.375 05:18:34 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:31.375 05:18:34 -- setup/hugepages.sh@89 -- # local node 00:03:31.375 05:18:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.375 05:18:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.375 05:18:34 -- setup/hugepages.sh@92 -- # local surp 00:03:31.375 05:18:34 -- setup/hugepages.sh@93 -- # local resv 00:03:31.375 05:18:34 -- setup/hugepages.sh@94 -- # local anon 00:03:31.375 05:18:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.375 05:18:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.375 05:18:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.375 05:18:34 -- setup/common.sh@18 -- # local node= 00:03:31.375 05:18:34 -- setup/common.sh@19 -- # local var val 00:03:31.375 05:18:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.375 05:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.375 05:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.375 05:18:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.375 05:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.375 05:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109731588 kB' 'MemAvailable: 113198864 kB' 'Buffers: 5168 kB' 'Cached: 9818600 kB' 'SwapCached: 0 kB' 'Active: 6604652 kB' 'Inactive: 3765728 kB' 'Active(anon): 6209308 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549460 kB' 'Mapped: 182072 kB' 'Shmem: 5662696 kB' 'KReclaimable: 260924 kB' 'Slab: 1349300 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1088376 kB' 'KernelStack: 27216 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 
7501332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235808 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.375 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.375 05:18:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 
-- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- 
setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.376 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.376 05:18:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.376 05:18:34 -- setup/common.sh@33 -- # echo 0 00:03:31.376 05:18:34 -- setup/common.sh@33 -- # return 0 00:03:31.376 05:18:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:31.376 05:18:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.376 05:18:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.376 05:18:34 -- setup/common.sh@18 -- # local node= 00:03:31.376 05:18:34 -- setup/common.sh@19 -- # local var val 00:03:31.376 05:18:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.376 05:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.377 05:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.377 05:18:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.377 05:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.377 05:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109731172 kB' 'MemAvailable: 113198448 kB' 'Buffers: 5168 kB' 'Cached: 9818608 kB' 'SwapCached: 0 kB' 'Active: 6604284 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208940 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549620 kB' 'Mapped: 182060 kB' 'Shmem: 5662704 kB' 'KReclaimable: 260924 kB' 'Slab: 1349300 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1088376 kB' 'KernelStack: 27248 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7502988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235856 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
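The 'anon=0' assignment a few entries back follows a transparent-hugepage check ('[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]') and a get_meminfo AnonHugePages call that returned 0. The gist, with the sysfs path being an assumption about where that status string comes from:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # assumed source of "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)                     # THP-backed anon memory, 0 kB in this run
else
    anon=0
fi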
00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.377 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.377 05:18:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.378 05:18:34 -- setup/common.sh@33 -- # echo 0 00:03:31.378 05:18:34 -- setup/common.sh@33 -- # return 0 00:03:31.378 05:18:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:31.378 05:18:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.378 05:18:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.378 05:18:34 -- setup/common.sh@18 -- # local node= 00:03:31.378 05:18:34 -- setup/common.sh@19 -- # local var val 00:03:31.378 05:18:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.378 05:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.378 05:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.378 05:18:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.378 05:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.378 05:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109729968 kB' 'MemAvailable: 113197244 kB' 'Buffers: 5168 kB' 'Cached: 9818608 kB' 'SwapCached: 0 kB' 'Active: 6604488 kB' 'Inactive: 3765728 kB' 'Active(anon): 6209144 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549824 kB' 'Mapped: 182060 kB' 'Shmem: 5662704 kB' 'KReclaimable: 260924 kB' 'Slab: 1349336 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1088412 kB' 'KernelStack: 27392 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7503004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235856 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 
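The surp value captured just above and the HugePages_Rsvd value being scanned for here feed the pool check in setup/hugepages.sh: once both are known, the script reports nr_hugepages, resv_hugepages and surplus_hugepages and asserts that HugePages_Total equals the configured count plus surplus plus reserved, which is the "(( 1024 == nr_hugepages + surp + resv ))" entry that follows the scan. A rough reconstruction of that bookkeeping, reusing the get_meminfo sketch above; reading the configured count back from /proc/sys/vm/nr_hugepages is an assumption for the sketch, not something the trace shows:

  # verify that the kernel's hugepage pool matches what the test configured
  nr_hugepages=$(< /proc/sys/vm/nr_hugepages)   # 1024 in this run
  surp=$(get_meminfo HugePages_Surp)            # 0
  resv=$(get_meminfo HugePages_Rsvd)            # 0
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  # same consistency check the trace shows at hugepages.sh@107/@110
  if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
      echo "hugepage pool is consistent"
  fi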
00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.378 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.378 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- 
setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.379 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.379 05:18:34 -- setup/common.sh@33 -- # echo 0 00:03:31.379 05:18:34 -- setup/common.sh@33 -- # return 0 00:03:31.379 05:18:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:31.379 05:18:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.379 nr_hugepages=1024 00:03:31.379 05:18:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.379 resv_hugepages=0 00:03:31.379 05:18:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.379 surplus_hugepages=0 00:03:31.379 05:18:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.379 anon_hugepages=0 00:03:31.379 05:18:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.379 05:18:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.379 05:18:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.379 05:18:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.379 05:18:34 -- setup/common.sh@18 -- # local node= 00:03:31.379 05:18:34 -- setup/common.sh@19 -- # local var val 00:03:31.379 05:18:34 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:31.379 05:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.379 05:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.379 05:18:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.379 05:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.379 05:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.379 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109729348 kB' 'MemAvailable: 113196624 kB' 'Buffers: 5168 kB' 'Cached: 9818632 kB' 'SwapCached: 0 kB' 'Active: 6604268 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208924 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549036 kB' 'Mapped: 182060 kB' 'Shmem: 5662728 kB' 'KReclaimable: 260924 kB' 'Slab: 1349336 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1088412 kB' 'KernelStack: 27168 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7503020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235808 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 
05:18:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.380 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.380 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.381 05:18:34 -- setup/common.sh@33 -- # echo 1024 00:03:31.381 05:18:34 -- setup/common.sh@33 -- # return 0 00:03:31.381 05:18:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.381 05:18:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.381 05:18:34 -- setup/hugepages.sh@27 -- # local node 00:03:31.381 05:18:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.381 05:18:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.381 05:18:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.381 05:18:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.381 05:18:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.381 05:18:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.381 05:18:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.381 05:18:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.381 05:18:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.381 05:18:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.381 05:18:34 -- setup/common.sh@18 -- # local node=0 00:03:31.381 05:18:34 -- setup/common.sh@19 -- # local var val 00:03:31.381 05:18:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.381 05:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.381 05:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.381 05:18:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.381 05:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.381 05:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 59799704 kB' 'MemUsed: 5853260 kB' 'SwapCached: 0 kB' 'Active: 1809432 kB' 'Inactive: 175512 kB' 'Active(anon): 1616364 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898504 kB' 'Mapped: 68428 kB' 'AnonPages: 89596 kB' 'Shmem: 1529924 kB' 'KernelStack: 12088 kB' 'PageTables: 3152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162756 kB' 'Slab: 705972 kB' 'SReclaimable: 162756 kB' 'SUnreclaim: 543216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.381 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.381 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- 
setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@32 -- # continue 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.382 05:18:34 -- setup/common.sh@31 -- # read -r var val _ 
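This second scan runs against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo: get_meminfo was called with node=0, so mem_f switched to the per-node file and the "Node 0 " prefixes were stripped before matching. The per-node bookkeeping that ends in the "node0=1024 expecting 1024" line just below can be pictured roughly as follows; this is a sketch built on the get_meminfo helper above, with the expected counts taken from the nodes_sys assignments earlier in the trace rather than computed:

  shopt -s extglob
  declare -a expected=( [0]=1024 [1]=0 )    # per-node expectations seen in the trace
  for node_dir in /sys/devices/system/node/node+([0-9]); do
      n=${node_dir##*node}                              # numeric node id
      have=$(get_meminfo HugePages_Total "$n")          # kernel-reported pages on this node
      surp=$(get_meminfo HugePages_Surp "$n")           # surplus folded in (the script also folds in reserved)
      echo "node$n=$have expecting $(( expected[n] + surp ))"
      (( have == expected[n] + surp )) || echo "node$n hugepage mismatch" >&2
  done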
00:03:31.382 05:18:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.382 05:18:34 -- setup/common.sh@33 -- # echo 0 00:03:31.382 05:18:34 -- setup/common.sh@33 -- # return 0 00:03:31.382 05:18:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.382 05:18:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.382 05:18:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.382 05:18:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.382 05:18:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.382 node0=1024 expecting 1024 00:03:31.382 05:18:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.382 05:18:34 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:31.382 05:18:34 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:31.382 05:18:34 -- setup/hugepages.sh@202 -- # setup output 00:03:31.382 05:18:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.382 05:18:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.686 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:34.686 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.686 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:35.262 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:35.262 05:18:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:35.262 05:18:38 -- setup/hugepages.sh@89 -- # local node 00:03:35.262 05:18:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.262 05:18:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.262 05:18:38 -- setup/hugepages.sh@92 -- # local surp 00:03:35.262 05:18:38 -- setup/hugepages.sh@93 -- # local resv 00:03:35.262 05:18:38 -- setup/hugepages.sh@94 -- # local anon 00:03:35.262 05:18:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.262 05:18:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.262 05:18:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.262 05:18:38 -- setup/common.sh@18 -- # local node= 00:03:35.262 05:18:38 -- setup/common.sh@19 -- # local var val 00:03:35.262 05:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.262 05:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.262 05:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.262 05:18:38 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.262 05:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.262 05:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.262 05:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109749040 kB' 'MemAvailable: 113216316 kB' 'Buffers: 5168 kB' 'Cached: 9818736 kB' 'SwapCached: 0 kB' 'Active: 6604436 kB' 'Inactive: 3765728 kB' 'Active(anon): 6209092 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549116 kB' 'Mapped: 180968 kB' 'Shmem: 5662832 kB' 'KReclaimable: 260924 kB' 'Slab: 1348648 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1087724 kB' 'KernelStack: 27088 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7464452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235616 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:35.262 05:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.262 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.262 05:18:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.262 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.262 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 
05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
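This second verify_nr_hugepages pass opened with the test "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" a little earlier in the trace: that string is the current contents of /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only read when THP is not set to "[never]", which is what the scan here resolves to anon=0 just below. A small sketch of that gate, assuming the get_meminfo helper from the first sketch:

  # count anonymous THP usage only when transparent hugepages are not disabled
  anon=0
  thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this box
  if [[ $thp_state != *\[never\]* ]]; then
      anon=$(get_meminfo AnonHugePages)    # 0 kB here, hence anon_hugepages=0 in the log
  fi
  echo "anon_hugepages=$anon"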
00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.263 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.263 05:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.263 05:18:38 -- setup/common.sh@33 -- # echo 0 00:03:35.263 05:18:38 -- setup/common.sh@33 -- # return 0 00:03:35.263 05:18:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.263 05:18:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.263 05:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.263 05:18:38 -- setup/common.sh@18 -- # local node= 00:03:35.263 05:18:38 -- setup/common.sh@19 -- # local var val 00:03:35.263 05:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.263 05:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.263 05:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.263 05:18:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.263 05:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.264 05:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109748872 kB' 'MemAvailable: 113216148 kB' 'Buffers: 5168 kB' 'Cached: 9818740 kB' 'SwapCached: 0 kB' 'Active: 6603888 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208544 kB' 'Inactive(anon): 0 kB' 
'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548992 kB' 'Mapped: 180968 kB' 'Shmem: 5662836 kB' 'KReclaimable: 260924 kB' 'Slab: 1348716 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1087792 kB' 'KernelStack: 27040 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7464464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.264 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.264 05:18:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 
05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.265 05:18:38 -- setup/common.sh@33 -- # echo 0 00:03:35.265 05:18:38 -- setup/common.sh@33 -- # return 0 00:03:35.265 05:18:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.265 05:18:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.265 05:18:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.265 05:18:38 -- setup/common.sh@18 -- # local node= 00:03:35.265 05:18:38 -- setup/common.sh@19 -- # local var val 00:03:35.265 05:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.265 05:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.265 05:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.265 05:18:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.265 05:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.265 05:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109749308 kB' 'MemAvailable: 113216584 kB' 'Buffers: 5168 kB' 'Cached: 9818752 kB' 'SwapCached: 0 kB' 'Active: 6603844 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208500 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549028 kB' 'Mapped: 180968 kB' 'Shmem: 5662848 kB' 'KReclaimable: 260924 kB' 'Slab: 1348716 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1087792 kB' 'KernelStack: 27056 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7464480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.265 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.265 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 
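The xtrace above is setup/common.sh's get_meminfo helper walking every field of /proc/meminfo with `IFS=': '` and `read -r var val _`, issuing `continue` for each field that is not the requested key (AnonHugePages, then HugePages_Surp, here HugePages_Rsvd), and echoing the matched value. The following is a minimal stand-alone sketch of that pattern, assuming the sysfs layout visible in the trace; the helper name `meminfo_value` is illustrative and not part of the SPDK tree.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the field-matching loop traced above (setup/common.sh get_meminfo).
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node file instead (common.sh@23-24 in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip that before splitting.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0
}

meminfo_value AnonHugePages      # -> 0 in the run above
meminfo_value HugePages_Total    # -> 1024 in the run above
```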
00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 
05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.266 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.266 05:18:38 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.266 05:18:38 -- setup/common.sh@33 -- # echo 0 00:03:35.266 05:18:38 -- setup/common.sh@33 -- # return 0 00:03:35.266 05:18:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.266 05:18:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.266 nr_hugepages=1024 00:03:35.266 05:18:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.266 resv_hugepages=0 00:03:35.266 05:18:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.266 surplus_hugepages=0 00:03:35.266 05:18:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.266 anon_hugepages=0 00:03:35.266 05:18:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.266 05:18:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.266 05:18:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.266 05:18:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.266 05:18:38 -- setup/common.sh@18 -- # local node= 00:03:35.266 05:18:38 -- setup/common.sh@19 -- # local var val 00:03:35.266 05:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.266 05:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.266 05:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.266 05:18:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.266 05:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.266 05:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.267 05:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126324764 kB' 'MemFree: 109748552 kB' 'MemAvailable: 113215828 kB' 'Buffers: 5168 kB' 'Cached: 9818776 kB' 'SwapCached: 0 kB' 'Active: 6603516 kB' 'Inactive: 3765728 kB' 'Active(anon): 6208172 kB' 'Inactive(anon): 0 kB' 'Active(file): 395344 kB' 'Inactive(file): 3765728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548624 kB' 'Mapped: 180968 kB' 'Shmem: 5662872 kB' 'KReclaimable: 260924 kB' 'Slab: 1348716 kB' 'SReclaimable: 260924 kB' 'SUnreclaim: 1087792 kB' 'KernelStack: 27040 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70502408 kB' 'Committed_AS: 7464496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235600 kB' 'VmallocChunk: 0 kB' 'Percpu: 107712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3108212 kB' 'DirectMap2M: 13348864 kB' 'DirectMap1G: 120586240 kB' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- 
setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.267 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.267 05:18:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 
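At hugepages.sh@97-109 above, the test records anon=0, surp=0 and resv=0, echoes its bookkeeping (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and checks the reported total against that sum before re-reading HugePages_Total. A hedged re-creation of that accounting check is sketched below; `hp` is a hypothetical stand-in for the scripts' get_meminfo helper, and the comparisons only pass on a host configured like the one in this run.

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the accounting seen at hugepages.sh@97-109 above.
hp() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }   # hypothetical helper

nr_hugepages=1024                 # the pool size this test run configured
anon=$(hp AnonHugePages)          # kB of anonymous THP, echoed but not summed
surp=$(hp HugePages_Surp)
resv=$(hp HugePages_Rsvd)
total=$(hp HugePages_Total)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The pool is consistent when the kernel's total equals requested + surplus + reserved;
# in the trace all of surp/resv are 0 and total is 1024, so both checks pass.
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
(( total == nr_hugepages ))               && echo 'no surplus or reserved pages'
```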
00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.268 05:18:38 -- setup/common.sh@33 -- # echo 1024 00:03:35.268 05:18:38 -- setup/common.sh@33 -- # return 0 00:03:35.268 05:18:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.268 05:18:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.268 05:18:38 -- setup/hugepages.sh@27 -- # local node 00:03:35.268 05:18:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.268 05:18:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.268 05:18:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.268 05:18:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.268 05:18:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.268 05:18:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.268 05:18:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.268 05:18:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.268 05:18:38 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.268 05:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.268 05:18:38 -- setup/common.sh@18 -- # local node=0 00:03:35.268 05:18:38 -- setup/common.sh@19 -- # local var val 00:03:35.268 05:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.268 05:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.268 05:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.268 05:18:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.268 05:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.268 05:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.268 05:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65652964 kB' 'MemFree: 59807016 kB' 'MemUsed: 5845948 kB' 'SwapCached: 0 kB' 'Active: 1809472 kB' 'Inactive: 175512 kB' 'Active(anon): 1616404 kB' 'Inactive(anon): 0 kB' 'Active(file): 193068 kB' 'Inactive(file): 175512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1898504 kB' 'Mapped: 68412 kB' 'AnonPages: 89636 kB' 'Shmem: 1529924 kB' 'KernelStack: 12136 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162756 kB' 'Slab: 705800 kB' 'SReclaimable: 162756 kB' 'SUnreclaim: 543044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 
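The per-node phase above (hugepages.sh@27-33 together with common.sh@23-24) switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and tallies huge pages per NUMA node, which is what produces the final "node0=1024 expecting 1024" line. A short sketch of that enumeration is shown below, assuming the same sysfs layout; `node_hp` is an illustrative helper, not taken from the SPDK scripts.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the per-NUMA-node pass traced above.
# Per-node meminfo lines read "Node 0 HugePages_Total:  1024", hence fields 3 and 4 below.
shopt -s extglob                   # the scripts' node+([0-9]) glob needs extglob

node_hp() {                        # hypothetical helper, not from the SPDK tree
    awk -v key="$2:" '$3 == key { print $4 }' "/sys/devices/system/node/node$1/meminfo"
}

for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}               # "/sys/.../node0" -> "0"
    echo "node$n HugePages_Total=$(node_hp "$n" HugePages_Total)"
done
```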
00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.268 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.268 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 
05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # continue 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.269 05:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.269 05:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.269 05:18:38 -- setup/common.sh@33 -- # echo 0 00:03:35.269 05:18:38 -- setup/common.sh@33 -- # return 0 00:03:35.269 05:18:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.269 05:18:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.269 05:18:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.269 05:18:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.269 05:18:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.269 node0=1024 expecting 1024 00:03:35.269 05:18:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.269 00:03:35.269 real 0m8.203s 00:03:35.269 user 0m3.141s 00:03:35.269 sys 0m5.199s 00:03:35.269 05:18:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.269 05:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.269 ************************************ 00:03:35.269 END TEST no_shrink_alloc 00:03:35.269 ************************************ 00:03:35.269 05:18:38 -- setup/hugepages.sh@217 -- # clear_hp 00:03:35.269 05:18:38 -- setup/hugepages.sh@37 -- # local node hp 00:03:35.269 05:18:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.269 05:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.269 05:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:35.269 05:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.269 05:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:35.269 05:18:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.269 05:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.269 05:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:35.269 05:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.269 05:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:35.269 05:18:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:35.269 05:18:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:35.269 00:03:35.269 real 0m29.614s 00:03:35.269 user 0m11.528s 00:03:35.269 sys 
0m18.585s 00:03:35.269 05:18:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.269 05:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.269 ************************************ 00:03:35.269 END TEST hugepages 00:03:35.269 ************************************ 00:03:35.531 05:18:38 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:35.531 05:18:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.531 05:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.531 ************************************ 00:03:35.531 START TEST driver 00:03:35.531 ************************************ 00:03:35.531 05:18:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:35.531 * Looking for test storage... 00:03:35.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.531 05:18:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:35.531 05:18:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:35.531 05:18:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:35.531 05:18:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:35.531 05:18:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:35.531 05:18:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:35.531 05:18:38 -- scripts/common.sh@335 -- # IFS=.-: 00:03:35.531 05:18:38 -- scripts/common.sh@335 -- # read -ra ver1 00:03:35.531 05:18:38 -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.531 05:18:38 -- scripts/common.sh@336 -- # read -ra ver2 00:03:35.531 05:18:38 -- scripts/common.sh@337 -- # local 'op=<' 00:03:35.531 05:18:38 -- scripts/common.sh@339 -- # ver1_l=2 00:03:35.531 05:18:38 -- scripts/common.sh@340 -- # ver2_l=1 00:03:35.531 05:18:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:35.531 05:18:38 -- scripts/common.sh@343 -- # case "$op" in 00:03:35.531 05:18:38 -- scripts/common.sh@344 -- # : 1 00:03:35.531 05:18:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:35.531 05:18:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.531 05:18:38 -- scripts/common.sh@364 -- # decimal 1 00:03:35.531 05:18:38 -- scripts/common.sh@352 -- # local d=1 00:03:35.531 05:18:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.531 05:18:38 -- scripts/common.sh@354 -- # echo 1 00:03:35.531 05:18:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:35.531 05:18:38 -- scripts/common.sh@365 -- # decimal 2 00:03:35.531 05:18:38 -- scripts/common.sh@352 -- # local d=2 00:03:35.531 05:18:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.531 05:18:38 -- scripts/common.sh@354 -- # echo 2 00:03:35.531 05:18:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:35.531 05:18:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:35.531 05:18:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:35.531 05:18:38 -- scripts/common.sh@367 -- # return 0 00:03:35.531 05:18:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.531 --rc genhtml_branch_coverage=1 00:03:35.531 --rc genhtml_function_coverage=1 00:03:35.531 --rc genhtml_legend=1 00:03:35.531 --rc geninfo_all_blocks=1 00:03:35.531 --rc geninfo_unexecuted_blocks=1 00:03:35.531 00:03:35.531 ' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.531 --rc genhtml_branch_coverage=1 00:03:35.531 --rc genhtml_function_coverage=1 00:03:35.531 --rc genhtml_legend=1 00:03:35.531 --rc geninfo_all_blocks=1 00:03:35.531 --rc geninfo_unexecuted_blocks=1 00:03:35.531 00:03:35.531 ' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.531 --rc genhtml_branch_coverage=1 00:03:35.531 --rc genhtml_function_coverage=1 00:03:35.531 --rc genhtml_legend=1 00:03:35.531 --rc geninfo_all_blocks=1 00:03:35.531 --rc geninfo_unexecuted_blocks=1 00:03:35.531 00:03:35.531 ' 00:03:35.531 05:18:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.531 --rc genhtml_branch_coverage=1 00:03:35.531 --rc genhtml_function_coverage=1 00:03:35.531 --rc genhtml_legend=1 00:03:35.531 --rc geninfo_all_blocks=1 00:03:35.531 --rc geninfo_unexecuted_blocks=1 00:03:35.531 00:03:35.531 ' 00:03:35.531 05:18:38 -- setup/driver.sh@68 -- # setup reset 00:03:35.531 05:18:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.531 05:18:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.822 05:18:44 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.822 05:18:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.822 05:18:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.822 05:18:44 -- common/autotest_common.sh@10 -- # set +x 00:03:40.822 ************************************ 00:03:40.822 START TEST guess_driver 00:03:40.822 ************************************ 00:03:40.822 05:18:44 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:40.822 05:18:44 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.822 05:18:44 -- setup/driver.sh@47 -- # local fail=0 00:03:40.822 05:18:44 -- setup/driver.sh@49 -- # pick_driver 00:03:40.822 05:18:44 -- 
setup/driver.sh@36 -- # vfio 00:03:40.822 05:18:44 -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.822 05:18:44 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.822 05:18:44 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:41.083 05:18:44 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:41.083 05:18:44 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:41.083 05:18:44 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:41.083 05:18:44 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:41.083 05:18:44 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:41.083 05:18:44 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:41.083 05:18:44 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:41.083 05:18:44 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:41.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:41.084 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:41.084 05:18:44 -- setup/driver.sh@30 -- # return 0 00:03:41.084 05:18:44 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:41.084 05:18:44 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:41.084 05:18:44 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:41.084 05:18:44 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:41.084 Looking for driver=vfio-pci 00:03:41.084 05:18:44 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.084 05:18:44 -- setup/driver.sh@45 -- # setup output config 00:03:41.084 05:18:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.084 05:18:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 
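For context on this guess_driver trace: the test settles on vfio-pci because the host exposes IOMMU groups (322 were counted here) and modprobe --show-depends vfio_pci resolves to loadable .ko modules; the long read loop that continues below is only confirming that each device reported by setup.sh config shows that driver. A minimal stand-alone sketch of that decision, assuming a Linux host with sysfs mounted (the function name and fallback message are illustrative, not SPDK's exact code):

  #!/usr/bin/env bash
  # Illustrative sketch of the driver guess traced above: prefer vfio-pci
  # when IOMMU groups exist and the vfio_pci module chain is loadable.
  pick_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'
      fi
  }
  pick_driver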
05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.292 05:18:47 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.292 05:18:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.292 05:18:48 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:45.292 05:18:48 -- setup/driver.sh@65 -- # setup reset 00:03:45.292 05:18:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.293 05:18:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.606 00:03:50.606 real 0m9.571s 00:03:50.606 user 0m3.114s 00:03:50.606 sys 0m5.632s 00:03:50.606 05:18:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.606 05:18:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.606 ************************************ 00:03:50.606 END TEST guess_driver 00:03:50.606 ************************************ 00:03:50.606 00:03:50.606 real 0m15.157s 00:03:50.606 user 0m4.854s 00:03:50.606 sys 0m8.645s 00:03:50.606 05:18:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.606 05:18:53 -- 
common/autotest_common.sh@10 -- # set +x 00:03:50.606 ************************************ 00:03:50.606 END TEST driver 00:03:50.606 ************************************ 00:03:50.606 05:18:53 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.606 05:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.606 05:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.606 05:18:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.606 ************************************ 00:03:50.606 START TEST devices 00:03:50.606 ************************************ 00:03:50.606 05:18:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.606 * Looking for test storage... 00:03:50.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:50.606 05:18:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:50.606 05:18:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:50.606 05:18:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:50.867 05:18:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:50.867 05:18:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:50.867 05:18:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:50.867 05:18:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:50.867 05:18:53 -- scripts/common.sh@335 -- # IFS=.-: 00:03:50.867 05:18:53 -- scripts/common.sh@335 -- # read -ra ver1 00:03:50.867 05:18:53 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.867 05:18:53 -- scripts/common.sh@336 -- # read -ra ver2 00:03:50.867 05:18:53 -- scripts/common.sh@337 -- # local 'op=<' 00:03:50.867 05:18:53 -- scripts/common.sh@339 -- # ver1_l=2 00:03:50.867 05:18:53 -- scripts/common.sh@340 -- # ver2_l=1 00:03:50.867 05:18:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:50.867 05:18:53 -- scripts/common.sh@343 -- # case "$op" in 00:03:50.867 05:18:53 -- scripts/common.sh@344 -- # : 1 00:03:50.867 05:18:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:50.867 05:18:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.867 05:18:53 -- scripts/common.sh@364 -- # decimal 1 00:03:50.867 05:18:53 -- scripts/common.sh@352 -- # local d=1 00:03:50.867 05:18:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.867 05:18:53 -- scripts/common.sh@354 -- # echo 1 00:03:50.867 05:18:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:50.867 05:18:53 -- scripts/common.sh@365 -- # decimal 2 00:03:50.867 05:18:53 -- scripts/common.sh@352 -- # local d=2 00:03:50.867 05:18:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.867 05:18:53 -- scripts/common.sh@354 -- # echo 2 00:03:50.867 05:18:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:50.867 05:18:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:50.867 05:18:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:50.867 05:18:53 -- scripts/common.sh@367 -- # return 0 00:03:50.867 05:18:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.867 05:18:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:50.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.867 --rc genhtml_branch_coverage=1 00:03:50.867 --rc genhtml_function_coverage=1 00:03:50.867 --rc genhtml_legend=1 00:03:50.867 --rc geninfo_all_blocks=1 00:03:50.868 --rc geninfo_unexecuted_blocks=1 00:03:50.868 00:03:50.868 ' 00:03:50.868 05:18:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.868 --rc genhtml_branch_coverage=1 00:03:50.868 --rc genhtml_function_coverage=1 00:03:50.868 --rc genhtml_legend=1 00:03:50.868 --rc geninfo_all_blocks=1 00:03:50.868 --rc geninfo_unexecuted_blocks=1 00:03:50.868 00:03:50.868 ' 00:03:50.868 05:18:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.868 --rc genhtml_branch_coverage=1 00:03:50.868 --rc genhtml_function_coverage=1 00:03:50.868 --rc genhtml_legend=1 00:03:50.868 --rc geninfo_all_blocks=1 00:03:50.868 --rc geninfo_unexecuted_blocks=1 00:03:50.868 00:03:50.868 ' 00:03:50.868 05:18:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.868 --rc genhtml_branch_coverage=1 00:03:50.868 --rc genhtml_function_coverage=1 00:03:50.868 --rc genhtml_legend=1 00:03:50.868 --rc geninfo_all_blocks=1 00:03:50.868 --rc geninfo_unexecuted_blocks=1 00:03:50.868 00:03:50.868 ' 00:03:50.868 05:18:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.868 05:18:53 -- setup/devices.sh@192 -- # setup reset 00:03:50.868 05:18:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.868 05:18:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.076 05:18:58 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:55.076 05:18:58 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:55.076 05:18:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:55.076 05:18:58 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:55.076 05:18:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.076 05:18:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:55.076 05:18:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:55.076 05:18:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.076 05:18:58 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.076 05:18:58 -- setup/devices.sh@196 -- # blocks=() 00:03:55.076 05:18:58 -- setup/devices.sh@196 -- # declare -a blocks 00:03:55.076 05:18:58 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:55.076 05:18:58 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:55.076 05:18:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:55.076 05:18:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:55.076 05:18:58 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:55.076 05:18:58 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:55.076 05:18:58 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:55.076 05:18:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:55.076 05:18:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:55.076 05:18:58 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:55.076 05:18:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:55.076 No valid GPT data, bailing 00:03:55.076 05:18:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.076 05:18:58 -- scripts/common.sh@393 -- # pt= 00:03:55.076 05:18:58 -- scripts/common.sh@394 -- # return 1 00:03:55.076 05:18:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:55.076 05:18:58 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:55.076 05:18:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:55.076 05:18:58 -- setup/common.sh@80 -- # echo 1920383410176 00:03:55.076 05:18:58 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:55.076 05:18:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:55.076 05:18:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:55.076 05:18:58 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:55.076 05:18:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:55.076 05:18:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:55.076 05:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.076 05:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.076 05:18:58 -- common/autotest_common.sh@10 -- # set +x 00:03:55.076 ************************************ 00:03:55.076 START TEST nvme_mount 00:03:55.076 ************************************ 00:03:55.076 05:18:58 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:55.076 05:18:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:55.076 05:18:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:55.076 05:18:58 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.076 05:18:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.076 05:18:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:55.076 05:18:58 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.076 05:18:58 -- setup/common.sh@40 -- # local part_no=1 00:03:55.076 05:18:58 -- setup/common.sh@41 -- # local size=1073741824 00:03:55.076 05:18:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.076 05:18:58 -- setup/common.sh@44 -- # parts=() 00:03:55.076 05:18:58 -- setup/common.sh@44 -- # local parts 00:03:55.076 05:18:58 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.076 05:18:58 -- setup/common.sh@46 -- # (( part <= part_no )) 
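The nvme_mount test being set up here goes on to partition the target disk before formatting it: zap any existing GPT, create one roughly 1 GiB partition (sectors 2048 through 2099199), wait for udev to publish the new node, then mkfs.ext4 and mount it. A rough manual equivalent, assuming /dev/nvme0n1 is a disposable test disk and using a placeholder mount point rather than the test's real paths:

  disk=/dev/nvme0n1                          # disposable test disk (assumption)
  mnt=/tmp/nvme_mount_example                # placeholder mount point
  sudo sgdisk "$disk" --zap-all              # destroy existing GPT/MBR structures
  sudo sgdisk "$disk" --new=1:2048:2099199   # one ~1 GiB partition (512-byte sectors)
  sudo udevadm settle                        # wait for /dev/nvme0n1p1 to appear
  sudo mkfs.ext4 -qF "${disk}p1"
  sudo mkdir -p "$mnt"
  sudo mount "${disk}p1" "$mnt"

The flock wrapped around sgdisk in the trace appears to serialize access to the disk across concurrent runs; it does not change the partitioning itself.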
00:03:55.076 05:18:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.076 05:18:58 -- setup/common.sh@46 -- # (( part++ )) 00:03:55.076 05:18:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.076 05:18:58 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:55.076 05:18:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.076 05:18:58 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:56.463 Creating new GPT entries in memory. 00:03:56.463 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.463 other utilities. 00:03:56.463 05:18:59 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.463 05:18:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.463 05:18:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.463 05:18:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.463 05:18:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:57.113 Creating new GPT entries in memory. 00:03:57.113 The operation has completed successfully. 00:03:57.113 05:19:00 -- setup/common.sh@57 -- # (( part++ )) 00:03:57.113 05:19:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.113 05:19:00 -- setup/common.sh@62 -- # wait 1581342 00:03:57.415 05:19:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.415 05:19:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:57.415 05:19:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.415 05:19:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:57.415 05:19:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:57.415 05:19:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.415 05:19:00 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.415 05:19:00 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:57.415 05:19:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:57.415 05:19:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.415 05:19:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.415 05:19:00 -- setup/devices.sh@53 -- # local found=0 00:03:57.415 05:19:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.415 05:19:00 -- setup/devices.sh@56 -- # : 00:03:57.415 05:19:00 -- setup/devices.sh@59 -- # local pci status 00:03:57.415 05:19:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.415 05:19:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:57.415 05:19:00 -- setup/devices.sh@47 -- # setup output config 00:03:57.415 05:19:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.415 05:19:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
config 00:04:00.718 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.718 05:19:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:00.718 05:19:03 -- setup/devices.sh@63 -- # found=1 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.719 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.719 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.978 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.978 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.978 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.978 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.978 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.978 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.978 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.978 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.978 05:19:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.978 05:19:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.238 05:19:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.238 05:19:04 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.238 05:19:04 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.238 05:19:04 -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.238 05:19:04 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.238 05:19:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.239 05:19:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.239 05:19:04 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.239 05:19:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.239 05:19:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.239 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.239 05:19:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.239 05:19:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.498 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:01.498 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:01.498 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.498 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.498 05:19:04 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:01.498 05:19:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:01.498 05:19:04 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.498 05:19:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.498 05:19:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:01.498 05:19:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.759 05:19:04 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.759 05:19:04 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:01.759 05:19:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:01.759 05:19:04 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.759 05:19:04 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.759 05:19:04 -- setup/devices.sh@53 -- # local found=0 00:04:01.759 05:19:04 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.759 05:19:04 -- setup/devices.sh@56 -- # : 00:04:01.759 05:19:04 -- setup/devices.sh@59 -- # local pci status 00:04:01.759 05:19:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.759 05:19:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:01.759 05:19:04 -- setup/devices.sh@47 -- # setup output config 00:04:01.759 05:19:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.759 05:19:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 
05:19:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:05.057 05:19:08 -- setup/devices.sh@63 -- # found=1 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.057 05:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.057 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.629 05:19:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.629 05:19:08 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.629 05:19:08 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.629 05:19:08 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.629 05:19:08 -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.629 05:19:08 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.629 05:19:08 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:05.629 05:19:08 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.629 05:19:08 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:05.629 05:19:08 -- setup/devices.sh@50 -- # local mount_point= 00:04:05.629 05:19:08 -- setup/devices.sh@51 -- # local test_file= 00:04:05.629 05:19:08 -- setup/devices.sh@53 -- # local found=0 00:04:05.629 05:19:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.629 05:19:08 -- setup/devices.sh@59 -- # local pci status 00:04:05.629 05:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.629 05:19:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.629 05:19:08 -- setup/devices.sh@47 -- # setup output config 00:04:05.629 05:19:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.629 05:19:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:09.848 05:19:12 -- setup/devices.sh@63 -- # found=1 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.848 05:19:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.848 05:19:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.848 05:19:12 -- setup/devices.sh@68 -- # return 0 00:04:09.848 05:19:12 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:09.848 05:19:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.848 05:19:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.848 05:19:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.848 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.848 00:04:09.848 real 0m14.437s 00:04:09.848 user 0m4.405s 00:04:09.848 sys 0m7.957s 00:04:09.848 05:19:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.848 05:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.848 ************************************ 00:04:09.848 END TEST nvme_mount 00:04:09.848 ************************************ 00:04:09.848 05:19:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:09.848 05:19:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.848 05:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.848 05:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.848 ************************************ 00:04:09.848 START TEST dm_mount 00:04:09.848 ************************************ 00:04:09.848 05:19:12 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:09.848 05:19:12 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:09.848 05:19:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:09.848 05:19:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:09.848 05:19:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:09.848 05:19:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.848 05:19:12 -- setup/common.sh@40 -- # local part_no=2 00:04:09.848 05:19:12 -- setup/common.sh@41 -- # local size=1073741824 00:04:09.848 05:19:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.848 05:19:12 -- setup/common.sh@44 -- # parts=() 00:04:09.848 05:19:12 -- setup/common.sh@44 -- # local parts 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.848 05:19:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.848 05:19:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.848 05:19:12 -- setup/common.sh@46 -- # (( part <= 
part_no )) 00:04:09.848 05:19:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:09.848 05:19:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.848 05:19:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:10.792 Creating new GPT entries in memory. 00:04:10.792 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.792 other utilities. 00:04:10.792 05:19:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.792 05:19:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.792 05:19:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.792 05:19:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.792 05:19:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:11.734 Creating new GPT entries in memory. 00:04:11.734 The operation has completed successfully. 00:04:11.734 05:19:14 -- setup/common.sh@57 -- # (( part++ )) 00:04:11.734 05:19:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.734 05:19:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.734 05:19:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.734 05:19:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:12.679 The operation has completed successfully. 00:04:12.679 05:19:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:12.679 05:19:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.679 05:19:15 -- setup/common.sh@62 -- # wait 1586918 00:04:12.679 05:19:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:12.679 05:19:15 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.679 05:19:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.679 05:19:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:12.937 05:19:15 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:12.937 05:19:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.937 05:19:15 -- setup/devices.sh@161 -- # break 00:04:12.937 05:19:15 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.937 05:19:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:12.937 05:19:15 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:12.937 05:19:15 -- setup/devices.sh@166 -- # dm=dm-1 00:04:12.937 05:19:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:12.937 05:19:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:12.937 05:19:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.937 05:19:15 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:12.937 05:19:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.937 05:19:15 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.937 05:19:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:12.937 05:19:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.937 05:19:15 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.937 05:19:15 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:12.937 05:19:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:12.937 05:19:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.937 05:19:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.937 05:19:15 -- setup/devices.sh@53 -- # local found=0 00:04:12.937 05:19:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.938 05:19:15 -- setup/devices.sh@56 -- # : 00:04:12.938 05:19:15 -- setup/devices.sh@59 -- # local pci status 00:04:12.938 05:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.938 05:19:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:12.938 05:19:15 -- setup/devices.sh@47 -- # setup output config 00:04:12.938 05:19:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.938 05:19:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:16.233 05:19:19 -- setup/devices.sh@63 -- # found=1 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.233 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.233 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.493 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.494 05:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.494 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.754 05:19:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.754 05:19:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:16.754 05:19:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.754 05:19:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.754 05:19:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.754 05:19:19 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.754 05:19:19 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:16.754 05:19:19 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.755 05:19:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:16.755 05:19:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:16.755 05:19:19 -- setup/devices.sh@51 -- # local test_file= 00:04:16.755 05:19:19 -- setup/devices.sh@53 -- # local found=0 00:04:16.755 05:19:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:16.755 05:19:19 -- setup/devices.sh@59 -- # local pci status 00:04:16.755 05:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.755 05:19:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.755 05:19:19 -- setup/devices.sh@47 -- # setup output config 00:04:16.755 05:19:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.755 05:19:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:20.955 05:19:23 -- setup/devices.sh@63 -- # found=1 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 
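The verify step traced above comes down to three checks: the device-mapper volume is mounted at the expected mount point, the test file written before the remount is still present, and the nvme_dm_test mount shows up for the backing nvme0n1 device in the setup.sh device scan. A condensed sketch of that logic, using the paths from this run; this is an illustration, not the actual setup/devices.sh code:

    # assumes the dm_mount layout used by this test run
    mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
    mountpoint -q "$mount_point"          || { echo "dm volume not mounted"; exit 1; }
    [[ -e $mount_point/test_dm ]]         || { echo "test file lost across remount"; exit 1; }
    grep -q nvme_dm_test /proc/mounts     || { echo "nvme_dm_test missing from /proc/mounts"; exit 1; }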
00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.955 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.955 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.956 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.956 05:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.956 05:19:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.956 05:19:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.956 05:19:23 -- setup/devices.sh@68 -- # return 0 00:04:20.956 05:19:23 -- setup/devices.sh@187 -- # cleanup_dm 00:04:20.956 05:19:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.956 05:19:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.956 05:19:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:20.956 05:19:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:20.956 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.956 05:19:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:20.956 00:04:20.956 real 0m11.174s 00:04:20.956 user 
0m3.090s 00:04:20.956 sys 0m5.162s 00:04:20.956 05:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.956 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.956 ************************************ 00:04:20.956 END TEST dm_mount 00:04:20.956 ************************************ 00:04:20.956 05:19:23 -- setup/devices.sh@1 -- # cleanup 00:04:20.956 05:19:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:20.956 05:19:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.956 05:19:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.956 05:19:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.956 05:19:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.217 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:21.217 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:21.217 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:21.217 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:21.217 05:19:24 -- setup/devices.sh@12 -- # cleanup_dm 00:04:21.217 05:19:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.217 05:19:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:21.217 05:19:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.217 05:19:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:21.217 05:19:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.217 05:19:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:21.217 00:04:21.217 real 0m30.528s 00:04:21.217 user 0m9.203s 00:04:21.217 sys 0m16.228s 00:04:21.217 05:19:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.217 05:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.217 ************************************ 00:04:21.217 END TEST devices 00:04:21.217 ************************************ 00:04:21.217 00:04:21.217 real 1m43.632s 00:04:21.217 user 0m34.917s 00:04:21.217 sys 1m0.307s 00:04:21.217 05:19:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.217 05:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.217 ************************************ 00:04:21.217 END TEST setup.sh 00:04:21.217 ************************************ 00:04:21.217 05:19:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.516 Hugepages 00:04:24.516 node hugesize free / total 00:04:24.516 node0 1048576kB 0 / 0 00:04:24.516 node0 2048kB 2048 / 2048 00:04:24.516 node1 1048576kB 0 / 0 00:04:24.516 node1 2048kB 0 / 0 00:04:24.516 00:04:24.516 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.516 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:24.516 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:24.516 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:24.516 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:24.516 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:24.777 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:24.777 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:24.777 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:24.777 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:24.777 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:04:24.777 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:24.777 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:24.777 05:19:27 -- spdk/autotest.sh@128 -- # uname -s 00:04:24.777 05:19:27 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:24.777 05:19:27 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:24.777 05:19:27 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.982 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:28.982 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:30.366 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:30.627 05:19:33 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:31.568 05:19:34 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:31.568 05:19:34 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:31.568 05:19:34 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.568 05:19:34 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:31.568 05:19:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:31.568 05:19:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:31.568 05:19:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.568 05:19:34 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.568 05:19:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:31.828 05:19:34 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:31.828 05:19:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:04:31.828 05:19:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.142 Waiting for block devices as requested 00:04:35.401 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:35.401 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:35.401 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:35.661 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:35.661 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:35.661 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:35.921 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:35.921 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:35.921 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:36.181 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.181 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
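The ioatdma/nvme -> vfio-pci transitions logged here, and the reverse rebinds in the setup.sh reset output around them, are ordinary sysfs driver rebinds. A minimal manual equivalent for a single function, using the standard driver_override mechanism; the BDF is the NVMe device from this log and is only an example:

    bdf=0000:65:00.0
    echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind     # detach the current driver
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # pin the next probe to vfio-pci
    echo "$bdf"   > /sys/bus/pci/drivers_probe                  # re-probe; vfio-pci claims the device
    echo          > /sys/bus/pci/devices/$bdf/driver_override   # clear the override afterwards

Rebinding back to nvme or ioatdma is the same sequence with the target driver name swapped in.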
00:04:36.441 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.441 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:36.441 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:36.701 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:36.701 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:36.701 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:36.962 05:19:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:36.962 05:19:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1497 -- # grep 0000:65:00.0/nvme/nvme 00:04:36.962 05:19:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:36.962 05:19:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:36.962 05:19:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:36.962 05:19:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:36.962 05:19:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x5f' 00:04:36.962 05:19:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:36.962 05:19:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:36.962 05:19:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:36.962 05:19:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:36.962 05:19:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:37.222 05:19:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:37.222 05:19:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:37.223 05:19:40 -- common/autotest_common.sh@1552 -- # continue 00:04:37.223 05:19:40 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:37.223 05:19:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.223 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.223 05:19:40 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:37.223 05:19:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.223 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.223 05:19:40 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.440 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.440 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:41.440 05:19:44 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:41.440 05:19:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.440 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:41.440 05:19:44 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:41.440 05:19:44 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:41.440 05:19:44 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.440 05:19:44 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:41.440 05:19:44 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:41.440 05:19:44 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:41.440 05:19:44 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:41.441 05:19:44 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:41.441 05:19:44 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.441 05:19:44 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.441 05:19:44 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:41.441 05:19:44 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:41.441 05:19:44 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:04:41.441 05:19:44 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:41.441 05:19:44 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:41.441 05:19:44 -- common/autotest_common.sh@1575 -- # device=0xa80a 00:04:41.441 05:19:44 -- common/autotest_common.sh@1576 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:41.441 05:19:44 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:41.441 05:19:44 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:41.441 05:19:44 -- common/autotest_common.sh@1588 -- # return 0 00:04:41.441 05:19:44 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:41.441 05:19:44 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:41.441 05:19:44 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:41.441 05:19:44 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:41.441 05:19:44 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:41.441 05:19:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.441 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:41.441 05:19:44 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.441 05:19:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.441 05:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.441 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:41.441 ************************************ 00:04:41.441 START TEST env 00:04:41.441 ************************************ 00:04:41.441 05:19:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.730 * Looking for test storage... 
00:04:41.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.730 05:19:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.730 05:19:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.730 05:19:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.730 05:19:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.730 05:19:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.730 05:19:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.730 05:19:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.730 05:19:44 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.730 05:19:44 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.730 05:19:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.730 05:19:44 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.730 05:19:44 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.730 05:19:44 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.730 05:19:44 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.730 05:19:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.730 05:19:44 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.730 05:19:44 -- scripts/common.sh@344 -- # : 1 00:04:41.730 05:19:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.731 05:19:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.731 05:19:44 -- scripts/common.sh@364 -- # decimal 1 00:04:41.731 05:19:44 -- scripts/common.sh@352 -- # local d=1 00:04:41.731 05:19:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.731 05:19:44 -- scripts/common.sh@354 -- # echo 1 00:04:41.731 05:19:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.731 05:19:44 -- scripts/common.sh@365 -- # decimal 2 00:04:41.731 05:19:44 -- scripts/common.sh@352 -- # local d=2 00:04:41.731 05:19:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.731 05:19:44 -- scripts/common.sh@354 -- # echo 2 00:04:41.731 05:19:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.731 05:19:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.731 05:19:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.731 05:19:44 -- scripts/common.sh@367 -- # return 0 00:04:41.731 05:19:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.731 05:19:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.731 --rc genhtml_branch_coverage=1 00:04:41.731 --rc genhtml_function_coverage=1 00:04:41.731 --rc genhtml_legend=1 00:04:41.731 --rc geninfo_all_blocks=1 00:04:41.731 --rc geninfo_unexecuted_blocks=1 00:04:41.731 00:04:41.731 ' 00:04:41.731 05:19:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.731 --rc genhtml_branch_coverage=1 00:04:41.731 --rc genhtml_function_coverage=1 00:04:41.731 --rc genhtml_legend=1 00:04:41.731 --rc geninfo_all_blocks=1 00:04:41.731 --rc geninfo_unexecuted_blocks=1 00:04:41.731 00:04:41.731 ' 00:04:41.731 05:19:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.731 --rc genhtml_branch_coverage=1 00:04:41.731 --rc genhtml_function_coverage=1 00:04:41.731 --rc genhtml_legend=1 00:04:41.731 --rc geninfo_all_blocks=1 00:04:41.731 --rc geninfo_unexecuted_blocks=1 00:04:41.731 00:04:41.731 ' 
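The block above is autotest_common.sh deciding which coverage flags to pass: it extracts the lcov version (1.15 here) and keeps the old-style --rc lcov_* options only when lcov predates 2.x. A stripped-down sketch of that decision, inferred from the trace rather than copied from the script:

    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15"
    if [[ ${lcov_ver%%.*} -lt 2 ]]; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi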
00:04:41.731 05:19:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.731 --rc genhtml_branch_coverage=1 00:04:41.731 --rc genhtml_function_coverage=1 00:04:41.731 --rc genhtml_legend=1 00:04:41.731 --rc geninfo_all_blocks=1 00:04:41.731 --rc geninfo_unexecuted_blocks=1 00:04:41.731 00:04:41.731 ' 00:04:41.731 05:19:44 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.731 05:19:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.731 05:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.731 05:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:41.731 ************************************ 00:04:41.731 START TEST env_memory 00:04:41.731 ************************************ 00:04:41.731 05:19:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.731 00:04:41.731 00:04:41.731 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.731 http://cunit.sourceforge.net/ 00:04:41.731 00:04:41.731 00:04:41.731 Suite: memory 00:04:41.731 Test: alloc and free memory map ...[2024-12-07 05:19:44.855021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.731 passed 00:04:41.731 Test: mem map translation ...[2024-12-07 05:19:44.883182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.731 [2024-12-07 05:19:44.883215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.731 [2024-12-07 05:19:44.883262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.731 [2024-12-07 05:19:44.883269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.731 passed 00:04:41.731 Test: mem map registration ...[2024-12-07 05:19:44.943320] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:41.731 [2024-12-07 05:19:44.943344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.041 passed 00:04:42.041 Test: mem map adjacent registrations ...passed 00:04:42.041 00:04:42.041 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.041 suites 1 1 n/a 0 0 00:04:42.041 tests 4 4 4 0 0 00:04:42.041 asserts 152 152 152 0 n/a 00:04:42.041 00:04:42.041 Elapsed time = 0.203 seconds 00:04:42.041 00:04:42.041 real 0m0.217s 00:04:42.041 user 0m0.206s 00:04:42.041 sys 0m0.010s 00:04:42.041 05:19:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.041 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.041 ************************************ 00:04:42.041 END TEST env_memory 00:04:42.041 ************************************ 00:04:42.041 05:19:45 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.041 05:19:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.041 05:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.041 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.041 ************************************ 00:04:42.041 START TEST env_vtophys 00:04:42.041 ************************************ 00:04:42.041 05:19:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.041 EAL: lib.eal log level changed from notice to debug 00:04:42.041 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.041 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.041 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.041 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.041 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.041 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.041 EAL: Detected lcore 6 as core 6 on socket 0 00:04:42.041 EAL: Detected lcore 7 as core 7 on socket 0 00:04:42.041 EAL: Detected lcore 8 as core 8 on socket 0 00:04:42.041 EAL: Detected lcore 9 as core 9 on socket 0 00:04:42.041 EAL: Detected lcore 10 as core 10 on socket 0 00:04:42.041 EAL: Detected lcore 11 as core 11 on socket 0 00:04:42.042 EAL: Detected lcore 12 as core 12 on socket 0 00:04:42.042 EAL: Detected lcore 13 as core 13 on socket 0 00:04:42.042 EAL: Detected lcore 14 as core 14 on socket 0 00:04:42.042 EAL: Detected lcore 15 as core 15 on socket 0 00:04:42.042 EAL: Detected lcore 16 as core 16 on socket 0 00:04:42.042 EAL: Detected lcore 17 as core 17 on socket 0 00:04:42.042 EAL: Detected lcore 18 as core 18 on socket 0 00:04:42.042 EAL: Detected lcore 19 as core 19 on socket 0 00:04:42.042 EAL: Detected lcore 20 as core 20 on socket 0 00:04:42.042 EAL: Detected lcore 21 as core 21 on socket 0 00:04:42.042 EAL: Detected lcore 22 as core 22 on socket 0 00:04:42.042 EAL: Detected lcore 23 as core 23 on socket 0 00:04:42.042 EAL: Detected lcore 24 as core 24 on socket 0 00:04:42.042 EAL: Detected lcore 25 as core 25 on socket 0 00:04:42.042 EAL: Detected lcore 26 as core 26 on socket 0 00:04:42.042 EAL: Detected lcore 27 as core 27 on socket 0 00:04:42.042 EAL: Detected lcore 28 as core 28 on socket 0 00:04:42.042 EAL: Detected lcore 29 as core 29 on socket 0 00:04:42.042 EAL: Detected lcore 30 as core 30 on socket 0 00:04:42.042 EAL: Detected lcore 31 as core 31 on socket 0 00:04:42.042 EAL: Detected lcore 32 as core 32 on socket 0 00:04:42.042 EAL: Detected lcore 33 as core 33 on socket 0 00:04:42.042 EAL: Detected lcore 34 as core 34 on socket 0 00:04:42.042 EAL: Detected lcore 35 as core 35 on socket 0 00:04:42.042 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.042 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.042 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.042 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.042 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.042 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.042 EAL: Detected lcore 42 as core 6 on socket 1 00:04:42.042 EAL: Detected lcore 43 as core 7 on socket 1 00:04:42.042 EAL: Detected lcore 44 as core 8 on socket 1 00:04:42.042 EAL: Detected lcore 45 as core 9 on socket 1 00:04:42.042 EAL: Detected lcore 46 as core 10 on socket 1 00:04:42.042 EAL: Detected lcore 47 as core 11 on socket 1 00:04:42.042 EAL: Detected lcore 48 as core 12 on socket 1 00:04:42.042 EAL: Detected lcore 49 as core 13 on socket 1 
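The lcore-to-core/socket map EAL prints here (and continues below) mirrors the kernel's own topology view; if a mapping looks off it can be cross-checked on the host with:

    lscpu --extended=CPU,CORE,SOCKET,NODE   # one row per logical CPU
    numactl --hardware                      # per-node CPU and memory summary

Nothing in the test depends on the exact numbering; this is informational EAL startup output.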
00:04:42.042 EAL: Detected lcore 50 as core 14 on socket 1 00:04:42.042 EAL: Detected lcore 51 as core 15 on socket 1 00:04:42.042 EAL: Detected lcore 52 as core 16 on socket 1 00:04:42.042 EAL: Detected lcore 53 as core 17 on socket 1 00:04:42.042 EAL: Detected lcore 54 as core 18 on socket 1 00:04:42.042 EAL: Detected lcore 55 as core 19 on socket 1 00:04:42.042 EAL: Detected lcore 56 as core 20 on socket 1 00:04:42.042 EAL: Detected lcore 57 as core 21 on socket 1 00:04:42.042 EAL: Detected lcore 58 as core 22 on socket 1 00:04:42.042 EAL: Detected lcore 59 as core 23 on socket 1 00:04:42.042 EAL: Detected lcore 60 as core 24 on socket 1 00:04:42.042 EAL: Detected lcore 61 as core 25 on socket 1 00:04:42.042 EAL: Detected lcore 62 as core 26 on socket 1 00:04:42.042 EAL: Detected lcore 63 as core 27 on socket 1 00:04:42.042 EAL: Detected lcore 64 as core 28 on socket 1 00:04:42.042 EAL: Detected lcore 65 as core 29 on socket 1 00:04:42.042 EAL: Detected lcore 66 as core 30 on socket 1 00:04:42.042 EAL: Detected lcore 67 as core 31 on socket 1 00:04:42.042 EAL: Detected lcore 68 as core 32 on socket 1 00:04:42.042 EAL: Detected lcore 69 as core 33 on socket 1 00:04:42.042 EAL: Detected lcore 70 as core 34 on socket 1 00:04:42.042 EAL: Detected lcore 71 as core 35 on socket 1 00:04:42.042 EAL: Detected lcore 72 as core 0 on socket 0 00:04:42.042 EAL: Detected lcore 73 as core 1 on socket 0 00:04:42.042 EAL: Detected lcore 74 as core 2 on socket 0 00:04:42.042 EAL: Detected lcore 75 as core 3 on socket 0 00:04:42.042 EAL: Detected lcore 76 as core 4 on socket 0 00:04:42.042 EAL: Detected lcore 77 as core 5 on socket 0 00:04:42.042 EAL: Detected lcore 78 as core 6 on socket 0 00:04:42.042 EAL: Detected lcore 79 as core 7 on socket 0 00:04:42.042 EAL: Detected lcore 80 as core 8 on socket 0 00:04:42.042 EAL: Detected lcore 81 as core 9 on socket 0 00:04:42.042 EAL: Detected lcore 82 as core 10 on socket 0 00:04:42.042 EAL: Detected lcore 83 as core 11 on socket 0 00:04:42.042 EAL: Detected lcore 84 as core 12 on socket 0 00:04:42.042 EAL: Detected lcore 85 as core 13 on socket 0 00:04:42.042 EAL: Detected lcore 86 as core 14 on socket 0 00:04:42.042 EAL: Detected lcore 87 as core 15 on socket 0 00:04:42.042 EAL: Detected lcore 88 as core 16 on socket 0 00:04:42.042 EAL: Detected lcore 89 as core 17 on socket 0 00:04:42.042 EAL: Detected lcore 90 as core 18 on socket 0 00:04:42.042 EAL: Detected lcore 91 as core 19 on socket 0 00:04:42.042 EAL: Detected lcore 92 as core 20 on socket 0 00:04:42.042 EAL: Detected lcore 93 as core 21 on socket 0 00:04:42.042 EAL: Detected lcore 94 as core 22 on socket 0 00:04:42.042 EAL: Detected lcore 95 as core 23 on socket 0 00:04:42.042 EAL: Detected lcore 96 as core 24 on socket 0 00:04:42.042 EAL: Detected lcore 97 as core 25 on socket 0 00:04:42.042 EAL: Detected lcore 98 as core 26 on socket 0 00:04:42.042 EAL: Detected lcore 99 as core 27 on socket 0 00:04:42.042 EAL: Detected lcore 100 as core 28 on socket 0 00:04:42.042 EAL: Detected lcore 101 as core 29 on socket 0 00:04:42.042 EAL: Detected lcore 102 as core 30 on socket 0 00:04:42.042 EAL: Detected lcore 103 as core 31 on socket 0 00:04:42.042 EAL: Detected lcore 104 as core 32 on socket 0 00:04:42.042 EAL: Detected lcore 105 as core 33 on socket 0 00:04:42.042 EAL: Detected lcore 106 as core 34 on socket 0 00:04:42.042 EAL: Detected lcore 107 as core 35 on socket 0 00:04:42.042 EAL: Detected lcore 108 as core 0 on socket 1 00:04:42.042 EAL: Detected lcore 109 as core 1 on socket 1 00:04:42.042 
EAL: Detected lcore 110 as core 2 on socket 1 00:04:42.042 EAL: Detected lcore 111 as core 3 on socket 1 00:04:42.042 EAL: Detected lcore 112 as core 4 on socket 1 00:04:42.042 EAL: Detected lcore 113 as core 5 on socket 1 00:04:42.042 EAL: Detected lcore 114 as core 6 on socket 1 00:04:42.042 EAL: Detected lcore 115 as core 7 on socket 1 00:04:42.042 EAL: Detected lcore 116 as core 8 on socket 1 00:04:42.042 EAL: Detected lcore 117 as core 9 on socket 1 00:04:42.042 EAL: Detected lcore 118 as core 10 on socket 1 00:04:42.042 EAL: Detected lcore 119 as core 11 on socket 1 00:04:42.042 EAL: Detected lcore 120 as core 12 on socket 1 00:04:42.042 EAL: Detected lcore 121 as core 13 on socket 1 00:04:42.042 EAL: Detected lcore 122 as core 14 on socket 1 00:04:42.042 EAL: Detected lcore 123 as core 15 on socket 1 00:04:42.042 EAL: Detected lcore 124 as core 16 on socket 1 00:04:42.042 EAL: Detected lcore 125 as core 17 on socket 1 00:04:42.042 EAL: Detected lcore 126 as core 18 on socket 1 00:04:42.042 EAL: Detected lcore 127 as core 19 on socket 1 00:04:42.042 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:42.042 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:42.042 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:42.042 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:42.042 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:42.042 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:42.042 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:42.042 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:42.042 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:42.042 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:42.042 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:42.042 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:42.042 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:42.042 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:42.042 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:42.042 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:42.043 EAL: Maximum logical cores by configuration: 128 00:04:42.043 EAL: Detected CPU lcores: 128 00:04:42.043 EAL: Detected NUMA nodes: 2 00:04:42.043 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:42.043 EAL: Detected shared linkage of DPDK 00:04:42.043 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.043 EAL: Bus pci wants IOVA as 'DC' 00:04:42.043 EAL: Buses did not request a specific IOVA mode. 00:04:42.043 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.043 EAL: Selected IOVA mode 'VA' 00:04:42.043 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.043 EAL: Probing VFIO support... 00:04:42.043 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.043 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.043 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.043 EAL: VFIO support initialized 00:04:42.043 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.043 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.043 EAL: Setting up physically contiguous memory... 
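The "Setting up physically contiguous memory" step that follows reserves virtual areas and backs them with the 2048 kB hugepages configured earlier (the setup.sh status table above showed 2048 pages on node0 and none on node1). A quick way to confirm the per-node pools EAL is about to draw from:

    # per-NUMA-node 2 MB hugepage counts, the same numbers setup.sh status reports
    grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep Huge /proc/meminfo    # aggregate view: HugePages_Total/Free and Hugepagesize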
00:04:42.043 EAL: Setting maximum number of open files to 524288 00:04:42.043 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.043 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.043 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.043 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.043 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.043 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.043 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.043 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.043 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.043 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.043 EAL: Hugepages will be freed exactly as allocated. 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: TSC frequency is ~2400000 KHz 00:04:42.043 EAL: Main lcore 0 is ready (tid=7f9d0c8d0a00;cpuset=[0]) 00:04:42.043 EAL: Trying to obtain current memory policy. 00:04:42.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.043 EAL: Restoring previous memory policy: 0 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.043 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.043 00:04:42.043 00:04:42.043 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.043 http://cunit.sourceforge.net/ 00:04:42.043 00:04:42.043 00:04:42.043 Suite: components_suite 00:04:42.043 Test: vtophys_malloc_test ...passed 00:04:42.043 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.043 EAL: Restoring previous memory policy: 4 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.043 EAL: Trying to obtain current memory policy. 00:04:42.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.043 EAL: Restoring previous memory policy: 4 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.043 EAL: Trying to obtain current memory policy. 00:04:42.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.043 EAL: Restoring previous memory policy: 4 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.043 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.043 EAL: request: mp_malloc_sync 00:04:42.043 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.044 EAL: Trying to obtain current memory policy. 
00:04:42.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.044 EAL: Restoring previous memory policy: 4 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.044 EAL: Trying to obtain current memory policy. 00:04:42.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.044 EAL: Restoring previous memory policy: 4 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.044 EAL: Trying to obtain current memory policy. 00:04:42.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.044 EAL: Restoring previous memory policy: 4 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.044 EAL: Trying to obtain current memory policy. 00:04:42.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.044 EAL: Restoring previous memory policy: 4 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.044 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.044 EAL: request: mp_malloc_sync 00:04:42.044 EAL: No shared files mode enabled, IPC is disabled 00:04:42.044 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.044 EAL: Trying to obtain current memory policy. 00:04:42.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.306 EAL: Restoring previous memory policy: 4 00:04:42.306 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.306 EAL: request: mp_malloc_sync 00:04:42.306 EAL: No shared files mode enabled, IPC is disabled 00:04:42.306 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.306 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.306 EAL: request: mp_malloc_sync 00:04:42.306 EAL: No shared files mode enabled, IPC is disabled 00:04:42.306 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.306 EAL: Trying to obtain current memory policy. 
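Each expand/shrink round in this malloc test (continuing up to roughly 1 GB further down) is served out of that same hugepage pool, so the growth is visible from outside the process; while the test runs, something like the following shows HugePages_Free dropping and recovering:

    watch -n 1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'

This is only an observation aid and not part of the test itself.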
00:04:42.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.306 EAL: Restoring previous memory policy: 4 00:04:42.306 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.306 EAL: request: mp_malloc_sync 00:04:42.306 EAL: No shared files mode enabled, IPC is disabled 00:04:42.306 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.306 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.306 EAL: request: mp_malloc_sync 00:04:42.306 EAL: No shared files mode enabled, IPC is disabled 00:04:42.306 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.306 EAL: Trying to obtain current memory policy. 00:04:42.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.565 EAL: Restoring previous memory policy: 4 00:04:42.565 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.565 EAL: request: mp_malloc_sync 00:04:42.565 EAL: No shared files mode enabled, IPC is disabled 00:04:42.565 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.565 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.824 EAL: request: mp_malloc_sync 00:04:42.824 EAL: No shared files mode enabled, IPC is disabled 00:04:42.825 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.825 passed 00:04:42.825 00:04:42.825 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.825 suites 1 1 n/a 0 0 00:04:42.825 tests 2 2 2 0 0 00:04:42.825 asserts 497 497 497 0 n/a 00:04:42.825 00:04:42.825 Elapsed time = 0.688 seconds 00:04:42.825 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.825 EAL: request: mp_malloc_sync 00:04:42.825 EAL: No shared files mode enabled, IPC is disabled 00:04:42.825 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.825 EAL: No shared files mode enabled, IPC is disabled 00:04:42.825 EAL: No shared files mode enabled, IPC is disabled 00:04:42.825 EAL: No shared files mode enabled, IPC is disabled 00:04:42.825 00:04:42.825 real 0m0.828s 00:04:42.825 user 0m0.427s 00:04:42.825 sys 0m0.375s 00:04:42.825 05:19:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.825 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.825 ************************************ 00:04:42.825 END TEST env_vtophys 00:04:42.825 ************************************ 00:04:42.825 05:19:45 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.825 05:19:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.825 05:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.825 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.825 ************************************ 00:04:42.825 START TEST env_pci 00:04:42.825 ************************************ 00:04:42.825 05:19:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.825 00:04:42.825 00:04:42.825 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.825 http://cunit.sourceforge.net/ 00:04:42.825 00:04:42.825 00:04:42.825 Suite: pci 00:04:42.825 Test: pci_hook ...[2024-12-07 05:19:45.953711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1598713 has claimed it 00:04:42.825 EAL: Cannot find device (10000:00:01.0) 00:04:42.825 EAL: Failed to attach device on primary process 00:04:42.825 passed 00:04:42.825 00:04:42.825 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.825 suites 1 1 n/a 0 0 00:04:42.825 tests 1 1 1 0 0 
00:04:42.825 asserts 25 25 25 0 n/a 00:04:42.825 00:04:42.825 Elapsed time = 0.030 seconds 00:04:42.825 00:04:42.825 real 0m0.051s 00:04:42.825 user 0m0.014s 00:04:42.825 sys 0m0.037s 00:04:42.825 05:19:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.825 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.825 ************************************ 00:04:42.825 END TEST env_pci 00:04:42.825 ************************************ 00:04:42.825 05:19:46 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.825 05:19:46 -- env/env.sh@15 -- # uname 00:04:42.825 05:19:46 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.825 05:19:46 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.825 05:19:46 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.825 05:19:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:42.825 05:19:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.825 05:19:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.825 ************************************ 00:04:42.825 START TEST env_dpdk_post_init 00:04:42.825 ************************************ 00:04:42.825 05:19:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.085 EAL: Detected CPU lcores: 128 00:04:43.085 EAL: Detected NUMA nodes: 2 00:04:43.085 EAL: Detected shared linkage of DPDK 00:04:43.085 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.085 EAL: Selected IOVA mode 'VA' 00:04:43.085 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.085 EAL: VFIO support initialized 00:04:43.085 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.085 EAL: Using IOMMU type 1 (Type 1) 00:04:43.085 EAL: Ignore mapping IO port bar(1) 00:04:43.365 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:43.365 EAL: Ignore mapping IO port bar(1) 00:04:43.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:43.627 EAL: Ignore mapping IO port bar(1) 00:04:43.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:43.887 EAL: Ignore mapping IO port bar(1) 00:04:43.887 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:44.148 EAL: Ignore mapping IO port bar(1) 00:04:44.148 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:44.409 EAL: Ignore mapping IO port bar(1) 00:04:44.409 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:44.670 EAL: Ignore mapping IO port bar(1) 00:04:44.670 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:44.670 EAL: Ignore mapping IO port bar(1) 00:04:44.930 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:45.192 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:45.192 EAL: Ignore mapping IO port bar(1) 00:04:45.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:45.452 EAL: Ignore mapping IO port bar(1) 00:04:45.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:45.713 EAL: Ignore mapping IO port bar(1) 00:04:45.713 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
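env_dpdk_post_init initializes EAL and probes whatever is bound to vfio-pci at that moment, which is why the probe list here is the eight I/OAT channels per socket plus the NVMe controller at 0000:65:00.0. The candidate set can be listed ahead of time straight from sysfs:

    # BDFs currently owned by vfio-pci, i.e. what spdk_ioat/spdk_nvme will probe
    ls /sys/bus/pci/drivers/vfio-pci/ | grep -E '^[0-9a-f]{4}:'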
00:04:45.974 EAL: Ignore mapping IO port bar(1) 00:04:45.974 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:46.236 EAL: Ignore mapping IO port bar(1) 00:04:46.236 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:46.236 EAL: Ignore mapping IO port bar(1) 00:04:46.498 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:46.498 EAL: Ignore mapping IO port bar(1) 00:04:46.759 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:46.759 EAL: Ignore mapping IO port bar(1) 00:04:46.759 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:47.019 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:47.019 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:47.019 Starting DPDK initialization... 00:04:47.019 Starting SPDK post initialization... 00:04:47.019 SPDK NVMe probe 00:04:47.019 Attaching to 0000:65:00.0 00:04:47.019 Attached to 0000:65:00.0 00:04:47.019 Cleaning up... 00:04:48.934 00:04:48.934 real 0m5.742s 00:04:48.934 user 0m0.187s 00:04:48.934 sys 0m0.107s 00:04:48.934 05:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.934 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.934 ************************************ 00:04:48.934 END TEST env_dpdk_post_init 00:04:48.934 ************************************ 00:04:48.934 05:19:51 -- env/env.sh@26 -- # uname 00:04:48.934 05:19:51 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.934 05:19:51 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.934 05:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.934 05:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.934 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.934 ************************************ 00:04:48.934 START TEST env_mem_callbacks 00:04:48.934 ************************************ 00:04:48.934 05:19:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.934 EAL: Detected CPU lcores: 128 00:04:48.934 EAL: Detected NUMA nodes: 2 00:04:48.934 EAL: Detected shared linkage of DPDK 00:04:48.934 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.934 EAL: Selected IOVA mode 'VA' 00:04:48.934 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.934 EAL: VFIO support initialized 00:04:48.934 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.934 00:04:48.934 00:04:48.934 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.934 http://cunit.sourceforge.net/ 00:04:48.934 00:04:48.934 00:04:48.934 Suite: memory 00:04:48.934 Test: test ... 
00:04:48.934 register 0x200000200000 2097152 00:04:48.934 malloc 3145728 00:04:48.935 register 0x200000400000 4194304 00:04:48.935 buf 0x200000500000 len 3145728 PASSED 00:04:48.935 malloc 64 00:04:48.935 buf 0x2000004fff40 len 64 PASSED 00:04:48.935 malloc 4194304 00:04:48.935 register 0x200000800000 6291456 00:04:48.935 buf 0x200000a00000 len 4194304 PASSED 00:04:48.935 free 0x200000500000 3145728 00:04:48.935 free 0x2000004fff40 64 00:04:48.935 unregister 0x200000400000 4194304 PASSED 00:04:48.935 free 0x200000a00000 4194304 00:04:48.935 unregister 0x200000800000 6291456 PASSED 00:04:48.935 malloc 8388608 00:04:48.935 register 0x200000400000 10485760 00:04:48.935 buf 0x200000600000 len 8388608 PASSED 00:04:48.935 free 0x200000600000 8388608 00:04:48.935 unregister 0x200000400000 10485760 PASSED 00:04:48.935 passed 00:04:48.935 00:04:48.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.935 suites 1 1 n/a 0 0 00:04:48.935 tests 1 1 1 0 0 00:04:48.935 asserts 15 15 15 0 n/a 00:04:48.935 00:04:48.935 Elapsed time = 0.010 seconds 00:04:48.935 00:04:48.935 real 0m0.068s 00:04:48.935 user 0m0.022s 00:04:48.935 sys 0m0.046s 00:04:48.935 05:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.935 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.935 ************************************ 00:04:48.935 END TEST env_mem_callbacks 00:04:48.935 ************************************ 00:04:48.935 00:04:48.935 real 0m7.351s 00:04:48.935 user 0m1.046s 00:04:48.935 sys 0m0.878s 00:04:48.935 05:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.935 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.935 ************************************ 00:04:48.935 END TEST env 00:04:48.935 ************************************ 00:04:48.935 05:19:51 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.935 05:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.935 05:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.935 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.935 ************************************ 00:04:48.935 START TEST rpc 00:04:48.935 ************************************ 00:04:48.935 05:19:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.935 * Looking for test storage... 
00:04:48.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.935 05:19:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.935 05:19:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.935 05:19:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.935 05:19:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.935 05:19:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.935 05:19:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.935 05:19:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.935 05:19:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.935 05:19:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.935 05:19:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.935 05:19:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.935 05:19:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.935 05:19:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.935 05:19:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.935 05:19:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.935 05:19:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.935 05:19:52 -- scripts/common.sh@344 -- # : 1 00:04:48.935 05:19:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.935 05:19:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.195 05:19:52 -- scripts/common.sh@364 -- # decimal 1 00:04:49.195 05:19:52 -- scripts/common.sh@352 -- # local d=1 00:04:49.195 05:19:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.195 05:19:52 -- scripts/common.sh@354 -- # echo 1 00:04:49.195 05:19:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.195 05:19:52 -- scripts/common.sh@365 -- # decimal 2 00:04:49.195 05:19:52 -- scripts/common.sh@352 -- # local d=2 00:04:49.195 05:19:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.195 05:19:52 -- scripts/common.sh@354 -- # echo 2 00:04:49.195 05:19:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.195 05:19:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.195 05:19:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.195 05:19:52 -- scripts/common.sh@367 -- # return 0 00:04:49.195 05:19:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.195 05:19:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 05:19:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 05:19:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 
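The trace above shows rpc.sh probing the installed lcov with 'lcov --version', comparing the result against 2, and only then exporting the coverage flags. A rough sketch of that gate, assuming the cmp_versions semantics of scripts/common.sh (lcov 1.x still uses the --rc option names seen here):

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
    case "$ver" in
        1.*) export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' ;;   # 1.x option names
    esac
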
00:04:49.195 05:19:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 05:19:52 -- rpc/rpc.sh@65 -- # spdk_pid=1600071 00:04:49.195 05:19:52 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.195 05:19:52 -- rpc/rpc.sh@67 -- # waitforlisten 1600071 00:04:49.195 05:19:52 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.196 05:19:52 -- common/autotest_common.sh@829 -- # '[' -z 1600071 ']' 00:04:49.196 05:19:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.196 05:19:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.196 05:19:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.196 05:19:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.196 05:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:49.196 [2024-12-07 05:19:52.242344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.196 [2024-12-07 05:19:52.242416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600071 ] 00:04:49.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.196 [2024-12-07 05:19:52.325781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.196 [2024-12-07 05:19:52.417695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.196 [2024-12-07 05:19:52.417862] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.196 [2024-12-07 05:19:52.417874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1600071' to capture a snapshot of events at runtime. 00:04:49.196 [2024-12-07 05:19:52.417883] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1600071 for offline analysis/debug. 
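The notices above come from spdk_tgt being started for the rpc tests with the bdev tracepoint group enabled (-e bdev); the app prints both a live-snapshot command and the shared-memory trace file it keeps for offline analysis. A short sketch using only the commands echoed in the log (binary paths are assumed to sit under build/bin of the spdk checkout, and the pid is specific to this run):

    sudo ./build/bin/spdk_tgt -e bdev &                    # start the target with the 'bdev' tracepoint group
    sudo ./build/bin/spdk_trace -s spdk_tgt -p 1600071     # live snapshot, exactly as the notice suggests
    # or copy /dev/shm/spdk_tgt_trace.pid1600071 for offline analysis, as printed above
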
00:04:49.196 [2024-12-07 05:19:52.417914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.139 05:19:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.139 05:19:53 -- common/autotest_common.sh@862 -- # return 0 00:04:50.139 05:19:53 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.139 05:19:53 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.139 05:19:53 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.139 05:19:53 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.139 05:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.139 05:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.139 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 ************************************ 00:04:50.140 START TEST rpc_integrity 00:04:50.140 ************************************ 00:04:50.140 05:19:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:50.140 05:19:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.140 05:19:53 -- rpc/rpc.sh@13 -- # jq length 00:04:50.140 05:19:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.140 05:19:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.140 05:19:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.140 { 00:04:50.140 "name": "Malloc0", 00:04:50.140 "aliases": [ 00:04:50.140 "ea2ad927-7622-4d99-9824-32df036b4f85" 00:04:50.140 ], 00:04:50.140 "product_name": "Malloc disk", 00:04:50.140 "block_size": 512, 00:04:50.140 "num_blocks": 16384, 00:04:50.140 "uuid": "ea2ad927-7622-4d99-9824-32df036b4f85", 00:04:50.140 "assigned_rate_limits": { 00:04:50.140 "rw_ios_per_sec": 0, 00:04:50.140 "rw_mbytes_per_sec": 0, 00:04:50.140 "r_mbytes_per_sec": 0, 00:04:50.140 "w_mbytes_per_sec": 0 00:04:50.140 }, 00:04:50.140 "claimed": false, 00:04:50.140 "zoned": false, 00:04:50.140 "supported_io_types": { 00:04:50.140 "read": true, 00:04:50.140 "write": true, 00:04:50.140 "unmap": true, 00:04:50.140 "write_zeroes": true, 00:04:50.140 "flush": true, 00:04:50.140 "reset": true, 00:04:50.140 "compare": false, 00:04:50.140 "compare_and_write": false, 00:04:50.140 
"abort": true, 00:04:50.140 "nvme_admin": false, 00:04:50.140 "nvme_io": false 00:04:50.140 }, 00:04:50.140 "memory_domains": [ 00:04:50.140 { 00:04:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.140 "dma_device_type": 2 00:04:50.140 } 00:04:50.140 ], 00:04:50.140 "driver_specific": {} 00:04:50.140 } 00:04:50.140 ]' 00:04:50.140 05:19:53 -- rpc/rpc.sh@17 -- # jq length 00:04:50.140 05:19:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.140 05:19:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 [2024-12-07 05:19:53.199672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.140 [2024-12-07 05:19:53.199725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.140 [2024-12-07 05:19:53.199742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd275c0 00:04:50.140 [2024-12-07 05:19:53.199750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.140 [2024-12-07 05:19:53.201336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.140 [2024-12-07 05:19:53.201374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.140 Passthru0 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.140 { 00:04:50.140 "name": "Malloc0", 00:04:50.140 "aliases": [ 00:04:50.140 "ea2ad927-7622-4d99-9824-32df036b4f85" 00:04:50.140 ], 00:04:50.140 "product_name": "Malloc disk", 00:04:50.140 "block_size": 512, 00:04:50.140 "num_blocks": 16384, 00:04:50.140 "uuid": "ea2ad927-7622-4d99-9824-32df036b4f85", 00:04:50.140 "assigned_rate_limits": { 00:04:50.140 "rw_ios_per_sec": 0, 00:04:50.140 "rw_mbytes_per_sec": 0, 00:04:50.140 "r_mbytes_per_sec": 0, 00:04:50.140 "w_mbytes_per_sec": 0 00:04:50.140 }, 00:04:50.140 "claimed": true, 00:04:50.140 "claim_type": "exclusive_write", 00:04:50.140 "zoned": false, 00:04:50.140 "supported_io_types": { 00:04:50.140 "read": true, 00:04:50.140 "write": true, 00:04:50.140 "unmap": true, 00:04:50.140 "write_zeroes": true, 00:04:50.140 "flush": true, 00:04:50.140 "reset": true, 00:04:50.140 "compare": false, 00:04:50.140 "compare_and_write": false, 00:04:50.140 "abort": true, 00:04:50.140 "nvme_admin": false, 00:04:50.140 "nvme_io": false 00:04:50.140 }, 00:04:50.140 "memory_domains": [ 00:04:50.140 { 00:04:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.140 "dma_device_type": 2 00:04:50.140 } 00:04:50.140 ], 00:04:50.140 "driver_specific": {} 00:04:50.140 }, 00:04:50.140 { 00:04:50.140 "name": "Passthru0", 00:04:50.140 "aliases": [ 00:04:50.140 "4fdeda93-5036-5430-af3c-09268ec4a7ca" 00:04:50.140 ], 00:04:50.140 "product_name": "passthru", 00:04:50.140 "block_size": 512, 00:04:50.140 "num_blocks": 16384, 00:04:50.140 "uuid": "4fdeda93-5036-5430-af3c-09268ec4a7ca", 00:04:50.140 "assigned_rate_limits": { 00:04:50.140 "rw_ios_per_sec": 0, 00:04:50.140 "rw_mbytes_per_sec": 0, 00:04:50.140 "r_mbytes_per_sec": 0, 00:04:50.140 "w_mbytes_per_sec": 0 
00:04:50.140 }, 00:04:50.140 "claimed": false, 00:04:50.140 "zoned": false, 00:04:50.140 "supported_io_types": { 00:04:50.140 "read": true, 00:04:50.140 "write": true, 00:04:50.140 "unmap": true, 00:04:50.140 "write_zeroes": true, 00:04:50.140 "flush": true, 00:04:50.140 "reset": true, 00:04:50.140 "compare": false, 00:04:50.140 "compare_and_write": false, 00:04:50.140 "abort": true, 00:04:50.140 "nvme_admin": false, 00:04:50.140 "nvme_io": false 00:04:50.140 }, 00:04:50.140 "memory_domains": [ 00:04:50.140 { 00:04:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.140 "dma_device_type": 2 00:04:50.140 } 00:04:50.140 ], 00:04:50.140 "driver_specific": { 00:04:50.140 "passthru": { 00:04:50.140 "name": "Passthru0", 00:04:50.140 "base_bdev_name": "Malloc0" 00:04:50.140 } 00:04:50.140 } 00:04:50.140 } 00:04:50.140 ]' 00:04:50.140 05:19:53 -- rpc/rpc.sh@21 -- # jq length 00:04:50.140 05:19:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.140 05:19:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.140 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.140 05:19:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.140 05:19:53 -- rpc/rpc.sh@26 -- # jq length 00:04:50.140 05:19:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.140 00:04:50.140 real 0m0.292s 00:04:50.140 user 0m0.183s 00:04:50.140 sys 0m0.042s 00:04:50.140 05:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.140 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.140 ************************************ 00:04:50.140 END TEST rpc_integrity 00:04:50.140 ************************************ 00:04:50.403 05:19:53 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.403 05:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.403 05:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 ************************************ 00:04:50.403 START TEST rpc_plugins 00:04:50.403 ************************************ 00:04:50.403 05:19:53 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:50.403 05:19:53 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.403 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.403 05:19:53 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.403 05:19:53 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.403 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.403 05:19:53 -- rpc/rpc.sh@31 -- # bdevs='[ 
00:04:50.403 { 00:04:50.403 "name": "Malloc1", 00:04:50.403 "aliases": [ 00:04:50.403 "deebb46b-abca-4854-947b-716a9a607ed7" 00:04:50.403 ], 00:04:50.403 "product_name": "Malloc disk", 00:04:50.403 "block_size": 4096, 00:04:50.403 "num_blocks": 256, 00:04:50.403 "uuid": "deebb46b-abca-4854-947b-716a9a607ed7", 00:04:50.403 "assigned_rate_limits": { 00:04:50.403 "rw_ios_per_sec": 0, 00:04:50.403 "rw_mbytes_per_sec": 0, 00:04:50.403 "r_mbytes_per_sec": 0, 00:04:50.403 "w_mbytes_per_sec": 0 00:04:50.403 }, 00:04:50.403 "claimed": false, 00:04:50.403 "zoned": false, 00:04:50.403 "supported_io_types": { 00:04:50.403 "read": true, 00:04:50.403 "write": true, 00:04:50.403 "unmap": true, 00:04:50.403 "write_zeroes": true, 00:04:50.403 "flush": true, 00:04:50.403 "reset": true, 00:04:50.403 "compare": false, 00:04:50.403 "compare_and_write": false, 00:04:50.403 "abort": true, 00:04:50.403 "nvme_admin": false, 00:04:50.403 "nvme_io": false 00:04:50.403 }, 00:04:50.403 "memory_domains": [ 00:04:50.403 { 00:04:50.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.403 "dma_device_type": 2 00:04:50.403 } 00:04:50.403 ], 00:04:50.403 "driver_specific": {} 00:04:50.403 } 00:04:50.403 ]' 00:04:50.403 05:19:53 -- rpc/rpc.sh@32 -- # jq length 00:04:50.403 05:19:53 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:50.403 05:19:53 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:50.403 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.403 05:19:53 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:50.403 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.403 05:19:53 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:50.403 05:19:53 -- rpc/rpc.sh@36 -- # jq length 00:04:50.403 05:19:53 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:50.403 00:04:50.403 real 0m0.136s 00:04:50.403 user 0m0.081s 00:04:50.403 sys 0m0.021s 00:04:50.403 05:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 ************************************ 00:04:50.403 END TEST rpc_plugins 00:04:50.403 ************************************ 00:04:50.403 05:19:53 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.403 05:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.403 05:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 ************************************ 00:04:50.403 START TEST rpc_trace_cmd_test 00:04:50.403 ************************************ 00:04:50.403 05:19:53 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:50.403 05:19:53 -- rpc/rpc.sh@40 -- # local info 00:04:50.403 05:19:53 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.403 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.403 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.403 05:19:53 -- rpc/rpc.sh@42 -- # info='{ 00:04:50.403 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1600071", 00:04:50.403 "tpoint_group_mask": "0x8", 00:04:50.403 "iscsi_conn": { 00:04:50.403 "mask": "0x2", 00:04:50.403 "tpoint_mask": "0x0" 
00:04:50.403 }, 00:04:50.403 "scsi": { 00:04:50.403 "mask": "0x4", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "bdev": { 00:04:50.403 "mask": "0x8", 00:04:50.403 "tpoint_mask": "0xffffffffffffffff" 00:04:50.403 }, 00:04:50.403 "nvmf_rdma": { 00:04:50.403 "mask": "0x10", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "nvmf_tcp": { 00:04:50.403 "mask": "0x20", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "ftl": { 00:04:50.403 "mask": "0x40", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "blobfs": { 00:04:50.403 "mask": "0x80", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "dsa": { 00:04:50.403 "mask": "0x200", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "thread": { 00:04:50.403 "mask": "0x400", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "nvme_pcie": { 00:04:50.403 "mask": "0x800", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "iaa": { 00:04:50.403 "mask": "0x1000", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "nvme_tcp": { 00:04:50.403 "mask": "0x2000", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 }, 00:04:50.403 "bdev_nvme": { 00:04:50.403 "mask": "0x4000", 00:04:50.403 "tpoint_mask": "0x0" 00:04:50.403 } 00:04:50.403 }' 00:04:50.403 05:19:53 -- rpc/rpc.sh@43 -- # jq length 00:04:50.665 05:19:53 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:50.665 05:19:53 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.665 05:19:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.665 05:19:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.665 05:19:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.665 05:19:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.665 05:19:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.665 05:19:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.665 05:19:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.665 00:04:50.665 real 0m0.231s 00:04:50.665 user 0m0.186s 00:04:50.665 sys 0m0.037s 00:04:50.665 05:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.665 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.665 ************************************ 00:04:50.665 END TEST rpc_trace_cmd_test 00:04:50.665 ************************************ 00:04:50.665 05:19:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.665 05:19:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.665 05:19:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.665 05:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.665 05:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.665 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.665 ************************************ 00:04:50.665 START TEST rpc_daemon_integrity 00:04:50.665 ************************************ 00:04:50.665 05:19:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:50.665 05:19:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.665 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.665 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.665 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.665 05:19:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.665 05:19:53 -- rpc/rpc.sh@13 -- # jq length 00:04:50.926 05:19:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.926 05:19:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.926 05:19:53 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.926 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.926 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.926 05:19:53 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.926 05:19:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.926 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.926 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.926 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.926 05:19:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.926 { 00:04:50.926 "name": "Malloc2", 00:04:50.926 "aliases": [ 00:04:50.926 "a36a04b8-4f2d-476f-b6d5-37133e127ff6" 00:04:50.926 ], 00:04:50.926 "product_name": "Malloc disk", 00:04:50.926 "block_size": 512, 00:04:50.926 "num_blocks": 16384, 00:04:50.926 "uuid": "a36a04b8-4f2d-476f-b6d5-37133e127ff6", 00:04:50.926 "assigned_rate_limits": { 00:04:50.926 "rw_ios_per_sec": 0, 00:04:50.926 "rw_mbytes_per_sec": 0, 00:04:50.926 "r_mbytes_per_sec": 0, 00:04:50.926 "w_mbytes_per_sec": 0 00:04:50.926 }, 00:04:50.926 "claimed": false, 00:04:50.926 "zoned": false, 00:04:50.926 "supported_io_types": { 00:04:50.926 "read": true, 00:04:50.926 "write": true, 00:04:50.926 "unmap": true, 00:04:50.926 "write_zeroes": true, 00:04:50.926 "flush": true, 00:04:50.926 "reset": true, 00:04:50.926 "compare": false, 00:04:50.926 "compare_and_write": false, 00:04:50.926 "abort": true, 00:04:50.926 "nvme_admin": false, 00:04:50.926 "nvme_io": false 00:04:50.926 }, 00:04:50.926 "memory_domains": [ 00:04:50.926 { 00:04:50.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.927 "dma_device_type": 2 00:04:50.927 } 00:04:50.927 ], 00:04:50.927 "driver_specific": {} 00:04:50.927 } 00:04:50.927 ]' 00:04:50.927 05:19:53 -- rpc/rpc.sh@17 -- # jq length 00:04:50.927 05:19:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.927 05:19:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.927 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.927 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 [2024-12-07 05:19:53.993828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.927 [2024-12-07 05:19:53.993873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.927 [2024-12-07 05:19:53.993888] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd28dc0 00:04:50.927 [2024-12-07 05:19:53.993896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.927 [2024-12-07 05:19:53.995270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.927 [2024-12-07 05:19:53.995305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.927 Passthru0 00:04:50.927 05:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.927 05:19:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.927 05:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.927 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 05:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.927 05:19:54 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.927 { 00:04:50.927 "name": "Malloc2", 00:04:50.927 "aliases": [ 00:04:50.927 "a36a04b8-4f2d-476f-b6d5-37133e127ff6" 00:04:50.927 ], 00:04:50.927 "product_name": "Malloc disk", 00:04:50.927 "block_size": 512, 00:04:50.927 "num_blocks": 16384, 00:04:50.927 "uuid": 
"a36a04b8-4f2d-476f-b6d5-37133e127ff6", 00:04:50.927 "assigned_rate_limits": { 00:04:50.927 "rw_ios_per_sec": 0, 00:04:50.927 "rw_mbytes_per_sec": 0, 00:04:50.927 "r_mbytes_per_sec": 0, 00:04:50.927 "w_mbytes_per_sec": 0 00:04:50.927 }, 00:04:50.927 "claimed": true, 00:04:50.927 "claim_type": "exclusive_write", 00:04:50.927 "zoned": false, 00:04:50.927 "supported_io_types": { 00:04:50.927 "read": true, 00:04:50.927 "write": true, 00:04:50.927 "unmap": true, 00:04:50.927 "write_zeroes": true, 00:04:50.927 "flush": true, 00:04:50.927 "reset": true, 00:04:50.927 "compare": false, 00:04:50.927 "compare_and_write": false, 00:04:50.927 "abort": true, 00:04:50.927 "nvme_admin": false, 00:04:50.927 "nvme_io": false 00:04:50.927 }, 00:04:50.927 "memory_domains": [ 00:04:50.927 { 00:04:50.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.927 "dma_device_type": 2 00:04:50.927 } 00:04:50.927 ], 00:04:50.927 "driver_specific": {} 00:04:50.927 }, 00:04:50.927 { 00:04:50.927 "name": "Passthru0", 00:04:50.927 "aliases": [ 00:04:50.927 "4013a52f-f8cb-560e-9b0a-acd2de650588" 00:04:50.927 ], 00:04:50.927 "product_name": "passthru", 00:04:50.927 "block_size": 512, 00:04:50.927 "num_blocks": 16384, 00:04:50.927 "uuid": "4013a52f-f8cb-560e-9b0a-acd2de650588", 00:04:50.927 "assigned_rate_limits": { 00:04:50.927 "rw_ios_per_sec": 0, 00:04:50.927 "rw_mbytes_per_sec": 0, 00:04:50.927 "r_mbytes_per_sec": 0, 00:04:50.927 "w_mbytes_per_sec": 0 00:04:50.927 }, 00:04:50.927 "claimed": false, 00:04:50.927 "zoned": false, 00:04:50.927 "supported_io_types": { 00:04:50.927 "read": true, 00:04:50.927 "write": true, 00:04:50.927 "unmap": true, 00:04:50.927 "write_zeroes": true, 00:04:50.927 "flush": true, 00:04:50.927 "reset": true, 00:04:50.927 "compare": false, 00:04:50.927 "compare_and_write": false, 00:04:50.927 "abort": true, 00:04:50.927 "nvme_admin": false, 00:04:50.927 "nvme_io": false 00:04:50.927 }, 00:04:50.927 "memory_domains": [ 00:04:50.927 { 00:04:50.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.927 "dma_device_type": 2 00:04:50.927 } 00:04:50.927 ], 00:04:50.927 "driver_specific": { 00:04:50.927 "passthru": { 00:04:50.927 "name": "Passthru0", 00:04:50.927 "base_bdev_name": "Malloc2" 00:04:50.927 } 00:04:50.927 } 00:04:50.927 } 00:04:50.927 ]' 00:04:50.927 05:19:54 -- rpc/rpc.sh@21 -- # jq length 00:04:50.927 05:19:54 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.927 05:19:54 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.927 05:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.927 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 05:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.927 05:19:54 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.927 05:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.927 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 05:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.927 05:19:54 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.927 05:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.927 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 05:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.927 05:19:54 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.927 05:19:54 -- rpc/rpc.sh@26 -- # jq length 00:04:50.927 05:19:54 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.927 00:04:50.927 real 0m0.289s 00:04:50.927 user 0m0.181s 00:04:50.927 sys 0m0.043s 00:04:50.927 05:19:54 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.927 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.927 ************************************ 00:04:50.927 END TEST rpc_daemon_integrity 00:04:50.927 ************************************ 00:04:51.188 05:19:54 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.188 05:19:54 -- rpc/rpc.sh@84 -- # killprocess 1600071 00:04:51.188 05:19:54 -- common/autotest_common.sh@936 -- # '[' -z 1600071 ']' 00:04:51.188 05:19:54 -- common/autotest_common.sh@940 -- # kill -0 1600071 00:04:51.188 05:19:54 -- common/autotest_common.sh@941 -- # uname 00:04:51.188 05:19:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.188 05:19:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1600071 00:04:51.188 05:19:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.188 05:19:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.188 05:19:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1600071' 00:04:51.188 killing process with pid 1600071 00:04:51.188 05:19:54 -- common/autotest_common.sh@955 -- # kill 1600071 00:04:51.188 05:19:54 -- common/autotest_common.sh@960 -- # wait 1600071 00:04:51.449 00:04:51.449 real 0m2.504s 00:04:51.449 user 0m3.137s 00:04:51.449 sys 0m0.768s 00:04:51.449 05:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.449 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.449 ************************************ 00:04:51.449 END TEST rpc 00:04:51.449 ************************************ 00:04:51.449 05:19:54 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.449 05:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.449 05:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.449 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.449 ************************************ 00:04:51.449 START TEST rpc_client 00:04:51.449 ************************************ 00:04:51.449 05:19:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.449 * Looking for test storage... 
00:04:51.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:51.449 05:19:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.449 05:19:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.449 05:19:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.711 05:19:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.711 05:19:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.711 05:19:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.711 05:19:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.711 05:19:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.711 05:19:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.711 05:19:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.711 05:19:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.711 05:19:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.711 05:19:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.711 05:19:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.711 05:19:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.711 05:19:54 -- scripts/common.sh@344 -- # : 1 00:04:51.711 05:19:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.711 05:19:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.711 05:19:54 -- scripts/common.sh@364 -- # decimal 1 00:04:51.711 05:19:54 -- scripts/common.sh@352 -- # local d=1 00:04:51.711 05:19:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.711 05:19:54 -- scripts/common.sh@354 -- # echo 1 00:04:51.711 05:19:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.711 05:19:54 -- scripts/common.sh@365 -- # decimal 2 00:04:51.711 05:19:54 -- scripts/common.sh@352 -- # local d=2 00:04:51.711 05:19:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.711 05:19:54 -- scripts/common.sh@354 -- # echo 2 00:04:51.711 05:19:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.711 05:19:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.711 05:19:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.711 05:19:54 -- scripts/common.sh@367 -- # return 0 00:04:51.711 05:19:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.711 --rc genhtml_branch_coverage=1 00:04:51.711 --rc genhtml_function_coverage=1 00:04:51.711 --rc genhtml_legend=1 00:04:51.711 --rc geninfo_all_blocks=1 00:04:51.711 --rc geninfo_unexecuted_blocks=1 00:04:51.711 00:04:51.711 ' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.711 --rc genhtml_branch_coverage=1 00:04:51.711 --rc genhtml_function_coverage=1 00:04:51.711 --rc genhtml_legend=1 00:04:51.711 --rc geninfo_all_blocks=1 00:04:51.711 --rc geninfo_unexecuted_blocks=1 00:04:51.711 00:04:51.711 ' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.711 --rc genhtml_branch_coverage=1 00:04:51.711 --rc genhtml_function_coverage=1 00:04:51.711 --rc genhtml_legend=1 00:04:51.711 --rc geninfo_all_blocks=1 00:04:51.711 --rc geninfo_unexecuted_blocks=1 00:04:51.711 00:04:51.711 
' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.711 --rc genhtml_branch_coverage=1 00:04:51.711 --rc genhtml_function_coverage=1 00:04:51.711 --rc genhtml_legend=1 00:04:51.711 --rc geninfo_all_blocks=1 00:04:51.711 --rc geninfo_unexecuted_blocks=1 00:04:51.711 00:04:51.711 ' 00:04:51.711 05:19:54 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.711 OK 00:04:51.711 05:19:54 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.711 00:04:51.711 real 0m0.223s 00:04:51.711 user 0m0.121s 00:04:51.711 sys 0m0.115s 00:04:51.711 05:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.711 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.711 ************************************ 00:04:51.711 END TEST rpc_client 00:04:51.711 ************************************ 00:04:51.711 05:19:54 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.711 05:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.711 05:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.711 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.711 ************************************ 00:04:51.711 START TEST json_config 00:04:51.711 ************************************ 00:04:51.711 05:19:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.711 05:19:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.711 05:19:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.711 05:19:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.974 05:19:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.974 05:19:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.974 05:19:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.974 05:19:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.974 05:19:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.974 05:19:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.974 05:19:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.974 05:19:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.974 05:19:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.974 05:19:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.974 05:19:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.974 05:19:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.974 05:19:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.974 05:19:54 -- scripts/common.sh@344 -- # : 1 00:04:51.974 05:19:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.974 05:19:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.974 05:19:54 -- scripts/common.sh@364 -- # decimal 1 00:04:51.974 05:19:54 -- scripts/common.sh@352 -- # local d=1 00:04:51.974 05:19:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.974 05:19:54 -- scripts/common.sh@354 -- # echo 1 00:04:51.974 05:19:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.974 05:19:54 -- scripts/common.sh@365 -- # decimal 2 00:04:51.974 05:19:54 -- scripts/common.sh@352 -- # local d=2 00:04:51.974 05:19:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.974 05:19:54 -- scripts/common.sh@354 -- # echo 2 00:04:51.974 05:19:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.974 05:19:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.974 05:19:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.974 05:19:54 -- scripts/common.sh@367 -- # return 0 00:04:51.974 05:19:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.974 05:19:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.974 --rc genhtml_branch_coverage=1 00:04:51.974 --rc genhtml_function_coverage=1 00:04:51.974 --rc genhtml_legend=1 00:04:51.974 --rc geninfo_all_blocks=1 00:04:51.974 --rc geninfo_unexecuted_blocks=1 00:04:51.974 00:04:51.974 ' 00:04:51.974 05:19:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.974 --rc genhtml_branch_coverage=1 00:04:51.974 --rc genhtml_function_coverage=1 00:04:51.974 --rc genhtml_legend=1 00:04:51.974 --rc geninfo_all_blocks=1 00:04:51.974 --rc geninfo_unexecuted_blocks=1 00:04:51.974 00:04:51.974 ' 00:04:51.974 05:19:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.974 --rc genhtml_branch_coverage=1 00:04:51.974 --rc genhtml_function_coverage=1 00:04:51.974 --rc genhtml_legend=1 00:04:51.974 --rc geninfo_all_blocks=1 00:04:51.974 --rc geninfo_unexecuted_blocks=1 00:04:51.974 00:04:51.974 ' 00:04:51.974 05:19:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.974 --rc genhtml_branch_coverage=1 00:04:51.974 --rc genhtml_function_coverage=1 00:04:51.974 --rc genhtml_legend=1 00:04:51.974 --rc geninfo_all_blocks=1 00:04:51.974 --rc geninfo_unexecuted_blocks=1 00:04:51.974 00:04:51.974 ' 00:04:51.974 05:19:54 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.974 05:19:54 -- nvmf/common.sh@7 -- # uname -s 00:04:51.974 05:19:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.974 05:19:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.974 05:19:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.974 05:19:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.974 05:19:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.974 05:19:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.974 05:19:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.974 05:19:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.974 05:19:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.974 05:19:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.974 05:19:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:51.974 05:19:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:51.974 05:19:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.974 05:19:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.974 05:19:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.974 05:19:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.974 05:19:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.974 05:19:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.974 05:19:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.974 05:19:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.974 05:19:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.974 05:19:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.974 05:19:55 -- paths/export.sh@5 -- # export PATH 00:04:51.974 05:19:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.974 05:19:55 -- nvmf/common.sh@46 -- # : 0 00:04:51.974 05:19:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:51.974 05:19:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:51.974 05:19:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:51.974 05:19:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.974 05:19:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.974 05:19:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:51.974 05:19:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:51.974 05:19:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:51.974 05:19:55 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:51.974 05:19:55 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:51.974 05:19:55 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:51.974 05:19:55 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.974 05:19:55 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.974 05:19:55 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:51.974 05:19:55 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.974 05:19:55 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:51.974 05:19:55 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.974 05:19:55 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:51.975 05:19:55 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.975 05:19:55 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:51.975 05:19:55 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:51.975 05:19:55 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.975 05:19:55 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:51.975 INFO: JSON configuration test init 00:04:51.975 05:19:55 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:51.975 05:19:55 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:51.975 05:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.975 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.975 05:19:55 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:51.975 05:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.975 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.975 05:19:55 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.975 05:19:55 -- json_config/json_config.sh@98 -- # local app=target 00:04:51.975 05:19:55 -- json_config/json_config.sh@99 -- # shift 00:04:51.975 05:19:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:51.975 05:19:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:51.975 05:19:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:51.975 05:19:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.975 05:19:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.975 05:19:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=1600962 00:04:51.975 05:19:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:51.975 Waiting for target to run... 
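json_config.sh runs its target on a private RPC socket so it can be shut down and relaunched mid-test; the app_params and app_socket tables traced above hold the flags it uses. A minimal sketch of the equivalent manual launch, with values taken from those tables (the target then idles until configuration arrives, because of --wait-for-rpc):

    # -m 0x1 -s 1024 and /var/tmp/spdk_tgt.sock come from app_params/app_socket above; run from the spdk checkout
    sudo ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
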
00:04:51.975 05:19:55 -- json_config/json_config.sh@114 -- # waitforlisten 1600962 /var/tmp/spdk_tgt.sock 00:04:51.975 05:19:55 -- common/autotest_common.sh@829 -- # '[' -z 1600962 ']' 00:04:51.975 05:19:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.975 05:19:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.975 05:19:55 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.975 05:19:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.975 05:19:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.975 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.975 [2024-12-07 05:19:55.079493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:51.975 [2024-12-07 05:19:55.079574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600962 ] 00:04:51.975 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.236 [2024-12-07 05:19:55.372709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.236 [2024-12-07 05:19:55.445353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.236 [2024-12-07 05:19:55.445518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.806 05:19:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.806 05:19:55 -- common/autotest_common.sh@862 -- # return 0 00:04:52.806 05:19:55 -- json_config/json_config.sh@115 -- # echo '' 00:04:52.806 00:04:52.806 05:19:55 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:52.806 05:19:55 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:52.806 05:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.806 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.806 05:19:55 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:52.806 05:19:55 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:52.806 05:19:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.806 05:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.806 05:19:55 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.806 05:19:55 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:52.806 05:19:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:53.378 05:19:56 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:53.378 05:19:56 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:53.378 05:19:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.378 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.378 05:19:56 -- json_config/json_config.sh@48 -- # local ret=0 00:04:53.378 05:19:56 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:53.378 
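With the target parked at --wait-for-rpc, the test pushes a full configuration over the socket and then inspects the notification types, which is what the gen_nvme.sh and load_config trace lines above show. The same two steps by hand, assuming the same workspace layout (tgt_rpc mirrors the helper used by json_config.sh):

    tgt_rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }       # same socket as in the trace
    ./scripts/gen_nvme.sh --json-with-subsystems | tgt_rpc load_config   # generate and load the NVMe bdev config
    tgt_rpc notify_get_types                                             # reports bdev_register and bdev_unregister here
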
05:19:56 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:53.378 05:19:56 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:53.378 05:19:56 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:53.378 05:19:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:53.638 05:19:56 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:53.638 05:19:56 -- json_config/json_config.sh@51 -- # local get_types 00:04:53.638 05:19:56 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:53.638 05:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.638 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.638 05:19:56 -- json_config/json_config.sh@58 -- # return 0 00:04:53.638 05:19:56 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:53.638 05:19:56 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:53.638 05:19:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.638 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.638 05:19:56 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:53.638 05:19:56 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:53.638 05:19:56 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.638 05:19:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.899 MallocForNvmf0 00:04:53.899 05:19:56 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.899 05:19:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.899 MallocForNvmf1 00:04:53.899 05:19:57 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.899 05:19:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:54.160 [2024-12-07 05:19:57.262052] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.160 05:19:57 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.160 05:19:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.421 05:19:57 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.421 05:19:57 -- 
json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.681 05:19:57 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.681 05:19:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.682 05:19:57 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.682 05:19:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.942 [2024-12-07 05:19:58.004679] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:54.942 05:19:58 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:54.942 05:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.942 05:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:54.942 05:19:58 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:54.942 05:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.942 05:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:54.942 05:19:58 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:54.942 05:19:58 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:54.942 05:19:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.203 MallocBdevForConfigChangeCheck 00:04:55.203 05:19:58 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:55.203 05:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.203 05:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.203 05:19:58 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:55.203 05:19:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.464 05:19:58 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:55.464 INFO: shutting down applications... 
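For readability, the create_nvmf_subsystem_config phase traced above amounts to the following shell sketch. Every command, argument and socket path is copied from the trace; only the RPC shorthand variable is introduced here for brevity and is not part of json_config.sh itself.

# Shorthand for the target's RPC client (illustrative; not a variable used by the test)
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, options as passed by the test
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420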
00:04:55.464 05:19:58 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:55.464 05:19:58 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:55.464 05:19:58 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:55.464 05:19:58 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.036 Calling clear_iscsi_subsystem 00:04:56.036 Calling clear_nvmf_subsystem 00:04:56.036 Calling clear_nbd_subsystem 00:04:56.036 Calling clear_ublk_subsystem 00:04:56.036 Calling clear_vhost_blk_subsystem 00:04:56.036 Calling clear_vhost_scsi_subsystem 00:04:56.036 Calling clear_scheduler_subsystem 00:04:56.036 Calling clear_bdev_subsystem 00:04:56.036 Calling clear_accel_subsystem 00:04:56.036 Calling clear_vmd_subsystem 00:04:56.036 Calling clear_sock_subsystem 00:04:56.036 Calling clear_iobuf_subsystem 00:04:56.036 05:19:59 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.036 05:19:59 -- json_config/json_config.sh@396 -- # count=100 00:04:56.036 05:19:59 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:56.036 05:19:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.036 05:19:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.036 05:19:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.297 05:19:59 -- json_config/json_config.sh@398 -- # break 00:04:56.297 05:19:59 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:56.297 05:19:59 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:56.297 05:19:59 -- json_config/json_config.sh@120 -- # local app=target 00:04:56.297 05:19:59 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:56.297 05:19:59 -- json_config/json_config.sh@124 -- # [[ -n 1600962 ]] 00:04:56.297 05:19:59 -- json_config/json_config.sh@127 -- # kill -SIGINT 1600962 00:04:56.297 05:19:59 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:56.297 05:19:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:56.297 05:19:59 -- json_config/json_config.sh@130 -- # kill -0 1600962 00:04:56.297 05:19:59 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:56.866 05:19:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:56.866 05:19:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:56.866 05:19:59 -- json_config/json_config.sh@130 -- # kill -0 1600962 00:04:56.866 05:19:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:56.866 05:19:59 -- json_config/json_config.sh@132 -- # break 00:04:56.866 05:19:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:56.866 05:19:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:56.866 SPDK target shutdown done 00:04:56.866 05:19:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:56.866 INFO: relaunching applications... 
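Stripped of test plumbing, the relaunch traced below restarts spdk_tgt from the configuration that was just saved and then waits for its RPC socket to answer. The command line and paths are taken from the trace; the polling loop is a simplified stand-in for the real waitforlisten helper in autotest_common.sh.

# Relaunch the target from the saved JSON configuration (flags as traced)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
app_pid=$!
# Simplified stand-in for waitforlisten: poll until the RPC socket responds
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done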
00:04:56.866 05:19:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.866 05:19:59 -- json_config/json_config.sh@98 -- # local app=target 00:04:56.866 05:19:59 -- json_config/json_config.sh@99 -- # shift 00:04:56.866 05:19:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:56.866 05:19:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:56.866 05:19:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:56.866 05:19:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:56.866 05:19:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:56.866 05:19:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=1602003 00:04:56.866 05:19:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:56.866 Waiting for target to run... 00:04:56.866 05:19:59 -- json_config/json_config.sh@114 -- # waitforlisten 1602003 /var/tmp/spdk_tgt.sock 00:04:56.866 05:19:59 -- common/autotest_common.sh@829 -- # '[' -z 1602003 ']' 00:04:56.866 05:19:59 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.866 05:19:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.866 05:19:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.866 05:19:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.866 05:19:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.866 05:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:56.866 [2024-12-07 05:19:59.956138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.866 [2024-12-07 05:19:59.956208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602003 ] 00:04:56.866 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.125 [2024-12-07 05:20:00.342495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.384 [2024-12-07 05:20:00.385226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.384 [2024-12-07 05:20:00.385326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.642 [2024-12-07 05:20:00.864440] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.900 [2024-12-07 05:20:00.896824] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.469 05:20:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.469 05:20:01 -- common/autotest_common.sh@862 -- # return 0 00:04:58.469 05:20:01 -- json_config/json_config.sh@115 -- # echo '' 00:04:58.469 00:04:58.469 05:20:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:58.469 05:20:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:58.469 INFO: Checking if target configuration is the same... 
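The "same configuration" check traced below is driven by json_diff.sh: it saves the running target's configuration, normalizes both JSON documents with config_filter.py -method sort, and compares them with diff -u. A rough equivalent is sketched here; the file-descriptor redirections are handled inside json_diff.sh and are not visible in the xtrace output, so the redirections below are an assumption.

# Sketch of the comparison performed by json_diff.sh (paths as in the trace)
filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
live=$(mktemp) && file=$(mktemp)
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
$filter -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > "$file"
diff -u "$live" "$file" && echo 'INFO: JSON config files are the same'
rm -f "$live" "$file"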
00:04:58.469 05:20:01 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.469 05:20:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:58.469 05:20:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.469 + '[' 2 -ne 2 ']' 00:04:58.469 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.469 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:58.469 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.469 +++ basename /dev/fd/62 00:04:58.469 ++ mktemp /tmp/62.XXX 00:04:58.469 + tmp_file_1=/tmp/62.7n5 00:04:58.469 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.469 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.469 + tmp_file_2=/tmp/spdk_tgt_config.json.pSw 00:04:58.469 + ret=0 00:04:58.469 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.469 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.727 + diff -u /tmp/62.7n5 /tmp/spdk_tgt_config.json.pSw 00:04:58.727 + echo 'INFO: JSON config files are the same' 00:04:58.728 INFO: JSON config files are the same 00:04:58.728 + rm /tmp/62.7n5 /tmp/spdk_tgt_config.json.pSw 00:04:58.728 + exit 0 00:04:58.728 05:20:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:58.728 05:20:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.728 INFO: changing configuration and checking if this can be detected... 00:04:58.728 05:20:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.728 05:20:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.728 05:20:01 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.728 05:20:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:58.728 05:20:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.728 + '[' 2 -ne 2 ']' 00:04:58.728 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.728 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:58.728 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.728 +++ basename /dev/fd/62 00:04:58.728 ++ mktemp /tmp/62.XXX 00:04:58.728 + tmp_file_1=/tmp/62.9uq 00:04:58.728 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.728 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.728 + tmp_file_2=/tmp/spdk_tgt_config.json.o3k 00:04:58.728 + ret=0 00:04:58.728 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.987 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.246 + diff -u /tmp/62.9uq /tmp/spdk_tgt_config.json.o3k 00:04:59.246 + ret=1 00:04:59.246 + echo '=== Start of file: /tmp/62.9uq ===' 00:04:59.246 + cat /tmp/62.9uq 00:04:59.246 + echo '=== End of file: /tmp/62.9uq ===' 00:04:59.246 + echo '' 00:04:59.246 + echo '=== Start of file: /tmp/spdk_tgt_config.json.o3k ===' 00:04:59.246 + cat /tmp/spdk_tgt_config.json.o3k 00:04:59.246 + echo '=== End of file: /tmp/spdk_tgt_config.json.o3k ===' 00:04:59.246 + echo '' 00:04:59.246 + rm /tmp/62.9uq /tmp/spdk_tgt_config.json.o3k 00:04:59.246 + exit 1 00:04:59.246 05:20:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:59.246 INFO: configuration change detected. 00:04:59.246 05:20:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:59.246 05:20:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:59.246 05:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.246 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.246 05:20:02 -- json_config/json_config.sh@360 -- # local ret=0 00:04:59.246 05:20:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:59.246 05:20:02 -- json_config/json_config.sh@370 -- # [[ -n 1602003 ]] 00:04:59.246 05:20:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:59.246 05:20:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:59.246 05:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.246 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.246 05:20:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:59.246 05:20:02 -- json_config/json_config.sh@246 -- # uname -s 00:04:59.246 05:20:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:59.246 05:20:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:59.246 05:20:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:59.246 05:20:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:59.246 05:20:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.246 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.246 05:20:02 -- json_config/json_config.sh@376 -- # killprocess 1602003 00:04:59.246 05:20:02 -- common/autotest_common.sh@936 -- # '[' -z 1602003 ']' 00:04:59.246 05:20:02 -- common/autotest_common.sh@940 -- # kill -0 1602003 00:04:59.246 05:20:02 -- common/autotest_common.sh@941 -- # uname 00:04:59.246 05:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.246 05:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1602003 00:04:59.246 05:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.246 05:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.246 05:20:02 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1602003' 00:04:59.246 killing process with pid 1602003 00:04:59.246 05:20:02 -- common/autotest_common.sh@955 -- # kill 1602003 00:04:59.246 05:20:02 -- common/autotest_common.sh@960 -- # wait 1602003 00:04:59.505 05:20:02 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.506 05:20:02 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:59.506 05:20:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.506 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.506 05:20:02 -- json_config/json_config.sh@381 -- # return 0 00:04:59.506 05:20:02 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:59.506 INFO: Success 00:04:59.506 00:04:59.506 real 0m7.877s 00:04:59.506 user 0m9.612s 00:04:59.506 sys 0m1.942s 00:04:59.506 05:20:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.506 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.506 ************************************ 00:04:59.506 END TEST json_config 00:04:59.506 ************************************ 00:04:59.506 05:20:02 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.506 05:20:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.506 05:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.506 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.506 ************************************ 00:04:59.506 START TEST json_config_extra_key 00:04:59.506 ************************************ 00:04:59.506 05:20:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.767 05:20:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.767 05:20:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.767 05:20:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.767 05:20:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.767 05:20:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.767 05:20:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.767 05:20:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.767 05:20:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.767 05:20:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.767 05:20:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.767 05:20:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.767 05:20:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.767 05:20:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.767 05:20:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.767 05:20:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.767 05:20:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.767 05:20:02 -- scripts/common.sh@344 -- # : 1 00:04:59.767 05:20:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.767 05:20:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.767 05:20:02 -- scripts/common.sh@364 -- # decimal 1 00:04:59.767 05:20:02 -- scripts/common.sh@352 -- # local d=1 00:04:59.767 05:20:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.767 05:20:02 -- scripts/common.sh@354 -- # echo 1 00:04:59.767 05:20:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.767 05:20:02 -- scripts/common.sh@365 -- # decimal 2 00:04:59.767 05:20:02 -- scripts/common.sh@352 -- # local d=2 00:04:59.767 05:20:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.767 05:20:02 -- scripts/common.sh@354 -- # echo 2 00:04:59.767 05:20:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.767 05:20:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.767 05:20:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.767 05:20:02 -- scripts/common.sh@367 -- # return 0 00:04:59.767 05:20:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.768 05:20:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.768 --rc genhtml_branch_coverage=1 00:04:59.768 --rc genhtml_function_coverage=1 00:04:59.768 --rc genhtml_legend=1 00:04:59.768 --rc geninfo_all_blocks=1 00:04:59.768 --rc geninfo_unexecuted_blocks=1 00:04:59.768 00:04:59.768 ' 00:04:59.768 05:20:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.768 --rc genhtml_branch_coverage=1 00:04:59.768 --rc genhtml_function_coverage=1 00:04:59.768 --rc genhtml_legend=1 00:04:59.768 --rc geninfo_all_blocks=1 00:04:59.768 --rc geninfo_unexecuted_blocks=1 00:04:59.768 00:04:59.768 ' 00:04:59.768 05:20:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.768 --rc genhtml_branch_coverage=1 00:04:59.768 --rc genhtml_function_coverage=1 00:04:59.768 --rc genhtml_legend=1 00:04:59.768 --rc geninfo_all_blocks=1 00:04:59.768 --rc geninfo_unexecuted_blocks=1 00:04:59.768 00:04:59.768 ' 00:04:59.768 05:20:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.768 --rc genhtml_branch_coverage=1 00:04:59.768 --rc genhtml_function_coverage=1 00:04:59.768 --rc genhtml_legend=1 00:04:59.768 --rc geninfo_all_blocks=1 00:04:59.768 --rc geninfo_unexecuted_blocks=1 00:04:59.768 00:04:59.768 ' 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.768 05:20:02 -- nvmf/common.sh@7 -- # uname -s 00:04:59.768 05:20:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.768 05:20:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.768 05:20:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.768 05:20:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.768 05:20:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.768 05:20:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.768 05:20:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.768 05:20:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.768 05:20:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.768 05:20:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.768 05:20:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:59.768 05:20:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:59.768 05:20:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.768 05:20:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.768 05:20:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.768 05:20:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.768 05:20:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.768 05:20:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.768 05:20:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.768 05:20:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.768 05:20:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.768 05:20:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.768 05:20:02 -- paths/export.sh@5 -- # export PATH 00:04:59.768 05:20:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.768 05:20:02 -- nvmf/common.sh@46 -- # : 0 00:04:59.768 05:20:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:59.768 05:20:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:59.768 05:20:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:59.768 05:20:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.768 05:20:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.768 05:20:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:59.768 05:20:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:59.768 05:20:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:59.768 INFO: launching applications... 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1602620 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:59.768 Waiting for target to run... 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1602620 /var/tmp/spdk_tgt.sock 00:04:59.768 05:20:02 -- common/autotest_common.sh@829 -- # '[' -z 1602620 ']' 00:04:59.768 05:20:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.768 05:20:02 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.768 05:20:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.768 05:20:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.768 05:20:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.768 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.768 [2024-12-07 05:20:02.994483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
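Further down the trace, the extra_key variant launches its own spdk_tgt instance (pid 1602620 in this run) with extra_key.json and then exercises the same graceful-shutdown loop seen earlier: SIGINT the target, then poll with kill -0 for at most 30 half-second intervals. A minimal sketch of that loop, with the pid and bounds taken from the trace:

app_pid=1602620            # pid recorded for the 'target' app in this run
kill -SIGINT "$app_pid"    # ask spdk_tgt to shut down cleanly
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # stop polling once the process has exited
    sleep 0.5
done
kill -0 "$app_pid" 2>/dev/null || echo 'SPDK target shutdown done'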
00:04:59.769 [2024-12-07 05:20:02.994565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602620 ] 00:05:00.029 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.289 [2024-12-07 05:20:03.269269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.289 [2024-12-07 05:20:03.313458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.289 [2024-12-07 05:20:03.313560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.549 05:20:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.549 05:20:03 -- common/autotest_common.sh@862 -- # return 0 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:00.549 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:00.549 INFO: shutting down applications... 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1602620 ]] 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1602620 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1602620 00:05:00.549 05:20:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1602620 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:01.120 SPDK target shutdown done 00:05:01.120 05:20:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:01.120 Success 00:05:01.120 00:05:01.120 real 0m1.545s 00:05:01.120 user 0m1.167s 00:05:01.120 sys 0m0.389s 00:05:01.120 05:20:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.120 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:01.120 ************************************ 00:05:01.120 END TEST json_config_extra_key 00:05:01.120 ************************************ 00:05:01.120 05:20:04 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.120 05:20:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.120 05:20:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.120 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:01.120 ************************************ 00:05:01.120 START TEST alias_rpc 00:05:01.120 ************************************ 00:05:01.121 05:20:04 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.382 * Looking for test storage... 00:05:01.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:01.382 05:20:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:01.382 05:20:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:01.382 05:20:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.382 05:20:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.382 05:20:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.382 05:20:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.382 05:20:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.382 05:20:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.382 05:20:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.382 05:20:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.382 05:20:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.382 05:20:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.382 05:20:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.382 05:20:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.382 05:20:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.382 05:20:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.382 05:20:04 -- scripts/common.sh@344 -- # : 1 00:05:01.382 05:20:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.382 05:20:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.382 05:20:04 -- scripts/common.sh@364 -- # decimal 1 00:05:01.382 05:20:04 -- scripts/common.sh@352 -- # local d=1 00:05:01.382 05:20:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.382 05:20:04 -- scripts/common.sh@354 -- # echo 1 00:05:01.382 05:20:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.382 05:20:04 -- scripts/common.sh@365 -- # decimal 2 00:05:01.382 05:20:04 -- scripts/common.sh@352 -- # local d=2 00:05:01.382 05:20:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.382 05:20:04 -- scripts/common.sh@354 -- # echo 2 00:05:01.382 05:20:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.382 05:20:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.382 05:20:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.382 05:20:04 -- scripts/common.sh@367 -- # return 0 00:05:01.382 05:20:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.382 05:20:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.382 --rc genhtml_branch_coverage=1 00:05:01.382 --rc genhtml_function_coverage=1 00:05:01.382 --rc genhtml_legend=1 00:05:01.382 --rc geninfo_all_blocks=1 00:05:01.382 --rc geninfo_unexecuted_blocks=1 00:05:01.382 00:05:01.382 ' 00:05:01.382 05:20:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.382 --rc genhtml_branch_coverage=1 00:05:01.382 --rc genhtml_function_coverage=1 00:05:01.382 --rc genhtml_legend=1 00:05:01.382 --rc geninfo_all_blocks=1 00:05:01.382 --rc geninfo_unexecuted_blocks=1 00:05:01.382 00:05:01.382 ' 00:05:01.382 05:20:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.382 --rc genhtml_branch_coverage=1 00:05:01.382 
--rc genhtml_function_coverage=1 00:05:01.382 --rc genhtml_legend=1 00:05:01.382 --rc geninfo_all_blocks=1 00:05:01.382 --rc geninfo_unexecuted_blocks=1 00:05:01.382 00:05:01.382 ' 00:05:01.382 05:20:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.382 --rc genhtml_branch_coverage=1 00:05:01.382 --rc genhtml_function_coverage=1 00:05:01.382 --rc genhtml_legend=1 00:05:01.382 --rc geninfo_all_blocks=1 00:05:01.382 --rc geninfo_unexecuted_blocks=1 00:05:01.382 00:05:01.382 ' 00:05:01.382 05:20:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.382 05:20:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1602977 00:05:01.382 05:20:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1602977 00:05:01.382 05:20:04 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.382 05:20:04 -- common/autotest_common.sh@829 -- # '[' -z 1602977 ']' 00:05:01.382 05:20:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.382 05:20:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.382 05:20:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.382 05:20:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.382 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:01.382 [2024-12-07 05:20:04.566099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:01.382 [2024-12-07 05:20:04.566154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602977 ] 00:05:01.382 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.643 [2024-12-07 05:20:04.644142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.643 [2024-12-07 05:20:04.700082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.643 [2024-12-07 05:20:04.700187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.214 05:20:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.214 05:20:05 -- common/autotest_common.sh@862 -- # return 0 00:05:02.214 05:20:05 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:02.474 05:20:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1602977 00:05:02.474 05:20:05 -- common/autotest_common.sh@936 -- # '[' -z 1602977 ']' 00:05:02.474 05:20:05 -- common/autotest_common.sh@940 -- # kill -0 1602977 00:05:02.474 05:20:05 -- common/autotest_common.sh@941 -- # uname 00:05:02.474 05:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.474 05:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1602977 00:05:02.474 05:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.474 05:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.474 05:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1602977' 00:05:02.474 killing process with pid 1602977 00:05:02.474 05:20:05 -- common/autotest_common.sh@955 -- # kill 1602977 00:05:02.474 
05:20:05 -- common/autotest_common.sh@960 -- # wait 1602977 00:05:02.734 00:05:02.734 real 0m1.458s 00:05:02.734 user 0m1.600s 00:05:02.734 sys 0m0.392s 00:05:02.734 05:20:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.734 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.734 ************************************ 00:05:02.734 END TEST alias_rpc 00:05:02.734 ************************************ 00:05:02.734 05:20:05 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:02.734 05:20:05 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.734 05:20:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.734 05:20:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.734 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.734 ************************************ 00:05:02.734 START TEST spdkcli_tcp 00:05:02.734 ************************************ 00:05:02.734 05:20:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.734 * Looking for test storage... 00:05:02.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:02.734 05:20:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.734 05:20:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.734 05:20:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.995 05:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.995 05:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.995 05:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.995 05:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.995 05:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.995 05:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.995 05:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.995 05:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.995 05:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.995 05:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.995 05:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.995 05:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.995 05:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.995 05:20:06 -- scripts/common.sh@344 -- # : 1 00:05:02.995 05:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.995 05:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.995 05:20:06 -- scripts/common.sh@364 -- # decimal 1 00:05:02.995 05:20:06 -- scripts/common.sh@352 -- # local d=1 00:05:02.995 05:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.995 05:20:06 -- scripts/common.sh@354 -- # echo 1 00:05:02.995 05:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.995 05:20:06 -- scripts/common.sh@365 -- # decimal 2 00:05:02.995 05:20:06 -- scripts/common.sh@352 -- # local d=2 00:05:02.995 05:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.995 05:20:06 -- scripts/common.sh@354 -- # echo 2 00:05:02.995 05:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.995 05:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.995 05:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.995 05:20:06 -- scripts/common.sh@367 -- # return 0 00:05:02.995 05:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.995 05:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.995 --rc genhtml_branch_coverage=1 00:05:02.995 --rc genhtml_function_coverage=1 00:05:02.995 --rc genhtml_legend=1 00:05:02.995 --rc geninfo_all_blocks=1 00:05:02.995 --rc geninfo_unexecuted_blocks=1 00:05:02.995 00:05:02.995 ' 00:05:02.995 05:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.995 --rc genhtml_branch_coverage=1 00:05:02.995 --rc genhtml_function_coverage=1 00:05:02.995 --rc genhtml_legend=1 00:05:02.995 --rc geninfo_all_blocks=1 00:05:02.995 --rc geninfo_unexecuted_blocks=1 00:05:02.995 00:05:02.995 ' 00:05:02.995 05:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.995 --rc genhtml_branch_coverage=1 00:05:02.995 --rc genhtml_function_coverage=1 00:05:02.995 --rc genhtml_legend=1 00:05:02.995 --rc geninfo_all_blocks=1 00:05:02.995 --rc geninfo_unexecuted_blocks=1 00:05:02.995 00:05:02.995 ' 00:05:02.995 05:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.995 --rc genhtml_branch_coverage=1 00:05:02.995 --rc genhtml_function_coverage=1 00:05:02.995 --rc genhtml_legend=1 00:05:02.995 --rc geninfo_all_blocks=1 00:05:02.995 --rc geninfo_unexecuted_blocks=1 00:05:02.995 00:05:02.995 ' 00:05:02.995 05:20:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:02.995 05:20:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:02.995 05:20:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:02.995 05:20:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.995 05:20:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.995 05:20:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.995 05:20:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.995 05:20:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.996 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:05:02.996 05:20:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1603373 00:05:02.996 05:20:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 1603373 00:05:02.996 
05:20:06 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.996 05:20:06 -- common/autotest_common.sh@829 -- # '[' -z 1603373 ']' 00:05:02.996 05:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.996 05:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.996 05:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.996 05:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.996 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:05:02.996 [2024-12-07 05:20:06.080727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.996 [2024-12-07 05:20:06.080782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603373 ] 00:05:02.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.996 [2024-12-07 05:20:06.156833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.996 [2024-12-07 05:20:06.214119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.996 [2024-12-07 05:20:06.214373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.996 [2024-12-07 05:20:06.214373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.940 05:20:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.940 05:20:06 -- common/autotest_common.sh@862 -- # return 0 00:05:03.940 05:20:06 -- spdkcli/tcp.sh@31 -- # socat_pid=1603710 00:05:03.940 05:20:06 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.940 05:20:06 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.940 [ 00:05:03.940 "bdev_malloc_delete", 00:05:03.940 "bdev_malloc_create", 00:05:03.940 "bdev_null_resize", 00:05:03.940 "bdev_null_delete", 00:05:03.940 "bdev_null_create", 00:05:03.940 "bdev_nvme_cuse_unregister", 00:05:03.940 "bdev_nvme_cuse_register", 00:05:03.940 "bdev_opal_new_user", 00:05:03.940 "bdev_opal_set_lock_state", 00:05:03.940 "bdev_opal_delete", 00:05:03.940 "bdev_opal_get_info", 00:05:03.940 "bdev_opal_create", 00:05:03.940 "bdev_nvme_opal_revert", 00:05:03.940 "bdev_nvme_opal_init", 00:05:03.940 "bdev_nvme_send_cmd", 00:05:03.940 "bdev_nvme_get_path_iostat", 00:05:03.940 "bdev_nvme_get_mdns_discovery_info", 00:05:03.940 "bdev_nvme_stop_mdns_discovery", 00:05:03.940 "bdev_nvme_start_mdns_discovery", 00:05:03.940 "bdev_nvme_set_multipath_policy", 00:05:03.940 "bdev_nvme_set_preferred_path", 00:05:03.940 "bdev_nvme_get_io_paths", 00:05:03.940 "bdev_nvme_remove_error_injection", 00:05:03.940 "bdev_nvme_add_error_injection", 00:05:03.940 "bdev_nvme_get_discovery_info", 00:05:03.940 "bdev_nvme_stop_discovery", 00:05:03.940 "bdev_nvme_start_discovery", 00:05:03.940 "bdev_nvme_get_controller_health_info", 00:05:03.941 "bdev_nvme_disable_controller", 00:05:03.941 "bdev_nvme_enable_controller", 00:05:03.941 "bdev_nvme_reset_controller", 00:05:03.941 "bdev_nvme_get_transport_statistics", 00:05:03.941 "bdev_nvme_apply_firmware", 00:05:03.941 
"bdev_nvme_detach_controller", 00:05:03.941 "bdev_nvme_get_controllers", 00:05:03.941 "bdev_nvme_attach_controller", 00:05:03.941 "bdev_nvme_set_hotplug", 00:05:03.941 "bdev_nvme_set_options", 00:05:03.941 "bdev_passthru_delete", 00:05:03.941 "bdev_passthru_create", 00:05:03.941 "bdev_lvol_grow_lvstore", 00:05:03.941 "bdev_lvol_get_lvols", 00:05:03.941 "bdev_lvol_get_lvstores", 00:05:03.941 "bdev_lvol_delete", 00:05:03.941 "bdev_lvol_set_read_only", 00:05:03.941 "bdev_lvol_resize", 00:05:03.941 "bdev_lvol_decouple_parent", 00:05:03.941 "bdev_lvol_inflate", 00:05:03.941 "bdev_lvol_rename", 00:05:03.941 "bdev_lvol_clone_bdev", 00:05:03.941 "bdev_lvol_clone", 00:05:03.941 "bdev_lvol_snapshot", 00:05:03.941 "bdev_lvol_create", 00:05:03.941 "bdev_lvol_delete_lvstore", 00:05:03.941 "bdev_lvol_rename_lvstore", 00:05:03.941 "bdev_lvol_create_lvstore", 00:05:03.941 "bdev_raid_set_options", 00:05:03.941 "bdev_raid_remove_base_bdev", 00:05:03.941 "bdev_raid_add_base_bdev", 00:05:03.941 "bdev_raid_delete", 00:05:03.941 "bdev_raid_create", 00:05:03.941 "bdev_raid_get_bdevs", 00:05:03.941 "bdev_error_inject_error", 00:05:03.941 "bdev_error_delete", 00:05:03.941 "bdev_error_create", 00:05:03.941 "bdev_split_delete", 00:05:03.941 "bdev_split_create", 00:05:03.941 "bdev_delay_delete", 00:05:03.941 "bdev_delay_create", 00:05:03.941 "bdev_delay_update_latency", 00:05:03.941 "bdev_zone_block_delete", 00:05:03.941 "bdev_zone_block_create", 00:05:03.941 "blobfs_create", 00:05:03.941 "blobfs_detect", 00:05:03.941 "blobfs_set_cache_size", 00:05:03.941 "bdev_aio_delete", 00:05:03.941 "bdev_aio_rescan", 00:05:03.941 "bdev_aio_create", 00:05:03.941 "bdev_ftl_set_property", 00:05:03.941 "bdev_ftl_get_properties", 00:05:03.941 "bdev_ftl_get_stats", 00:05:03.941 "bdev_ftl_unmap", 00:05:03.941 "bdev_ftl_unload", 00:05:03.941 "bdev_ftl_delete", 00:05:03.941 "bdev_ftl_load", 00:05:03.941 "bdev_ftl_create", 00:05:03.941 "bdev_virtio_attach_controller", 00:05:03.941 "bdev_virtio_scsi_get_devices", 00:05:03.941 "bdev_virtio_detach_controller", 00:05:03.941 "bdev_virtio_blk_set_hotplug", 00:05:03.941 "bdev_iscsi_delete", 00:05:03.941 "bdev_iscsi_create", 00:05:03.941 "bdev_iscsi_set_options", 00:05:03.941 "accel_error_inject_error", 00:05:03.941 "ioat_scan_accel_module", 00:05:03.941 "dsa_scan_accel_module", 00:05:03.941 "iaa_scan_accel_module", 00:05:03.941 "iscsi_set_options", 00:05:03.941 "iscsi_get_auth_groups", 00:05:03.941 "iscsi_auth_group_remove_secret", 00:05:03.941 "iscsi_auth_group_add_secret", 00:05:03.941 "iscsi_delete_auth_group", 00:05:03.941 "iscsi_create_auth_group", 00:05:03.941 "iscsi_set_discovery_auth", 00:05:03.941 "iscsi_get_options", 00:05:03.941 "iscsi_target_node_request_logout", 00:05:03.941 "iscsi_target_node_set_redirect", 00:05:03.941 "iscsi_target_node_set_auth", 00:05:03.941 "iscsi_target_node_add_lun", 00:05:03.941 "iscsi_get_connections", 00:05:03.941 "iscsi_portal_group_set_auth", 00:05:03.941 "iscsi_start_portal_group", 00:05:03.941 "iscsi_delete_portal_group", 00:05:03.941 "iscsi_create_portal_group", 00:05:03.941 "iscsi_get_portal_groups", 00:05:03.941 "iscsi_delete_target_node", 00:05:03.941 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.941 "iscsi_target_node_add_pg_ig_maps", 00:05:03.941 "iscsi_create_target_node", 00:05:03.941 "iscsi_get_target_nodes", 00:05:03.941 "iscsi_delete_initiator_group", 00:05:03.941 "iscsi_initiator_group_remove_initiators", 00:05:03.941 "iscsi_initiator_group_add_initiators", 00:05:03.941 "iscsi_create_initiator_group", 00:05:03.941 
"iscsi_get_initiator_groups", 00:05:03.941 "nvmf_set_crdt", 00:05:03.941 "nvmf_set_config", 00:05:03.941 "nvmf_set_max_subsystems", 00:05:03.941 "nvmf_subsystem_get_listeners", 00:05:03.941 "nvmf_subsystem_get_qpairs", 00:05:03.941 "nvmf_subsystem_get_controllers", 00:05:03.941 "nvmf_get_stats", 00:05:03.941 "nvmf_get_transports", 00:05:03.941 "nvmf_create_transport", 00:05:03.941 "nvmf_get_targets", 00:05:03.941 "nvmf_delete_target", 00:05:03.941 "nvmf_create_target", 00:05:03.941 "nvmf_subsystem_allow_any_host", 00:05:03.941 "nvmf_subsystem_remove_host", 00:05:03.941 "nvmf_subsystem_add_host", 00:05:03.941 "nvmf_subsystem_remove_ns", 00:05:03.941 "nvmf_subsystem_add_ns", 00:05:03.941 "nvmf_subsystem_listener_set_ana_state", 00:05:03.941 "nvmf_discovery_get_referrals", 00:05:03.941 "nvmf_discovery_remove_referral", 00:05:03.941 "nvmf_discovery_add_referral", 00:05:03.941 "nvmf_subsystem_remove_listener", 00:05:03.941 "nvmf_subsystem_add_listener", 00:05:03.941 "nvmf_delete_subsystem", 00:05:03.941 "nvmf_create_subsystem", 00:05:03.941 "nvmf_get_subsystems", 00:05:03.941 "env_dpdk_get_mem_stats", 00:05:03.941 "nbd_get_disks", 00:05:03.941 "nbd_stop_disk", 00:05:03.941 "nbd_start_disk", 00:05:03.941 "ublk_recover_disk", 00:05:03.941 "ublk_get_disks", 00:05:03.941 "ublk_stop_disk", 00:05:03.941 "ublk_start_disk", 00:05:03.941 "ublk_destroy_target", 00:05:03.941 "ublk_create_target", 00:05:03.941 "virtio_blk_create_transport", 00:05:03.941 "virtio_blk_get_transports", 00:05:03.941 "vhost_controller_set_coalescing", 00:05:03.941 "vhost_get_controllers", 00:05:03.941 "vhost_delete_controller", 00:05:03.941 "vhost_create_blk_controller", 00:05:03.941 "vhost_scsi_controller_remove_target", 00:05:03.941 "vhost_scsi_controller_add_target", 00:05:03.941 "vhost_start_scsi_controller", 00:05:03.941 "vhost_create_scsi_controller", 00:05:03.941 "thread_set_cpumask", 00:05:03.941 "framework_get_scheduler", 00:05:03.941 "framework_set_scheduler", 00:05:03.941 "framework_get_reactors", 00:05:03.941 "thread_get_io_channels", 00:05:03.941 "thread_get_pollers", 00:05:03.941 "thread_get_stats", 00:05:03.941 "framework_monitor_context_switch", 00:05:03.941 "spdk_kill_instance", 00:05:03.941 "log_enable_timestamps", 00:05:03.941 "log_get_flags", 00:05:03.941 "log_clear_flag", 00:05:03.941 "log_set_flag", 00:05:03.941 "log_get_level", 00:05:03.941 "log_set_level", 00:05:03.941 "log_get_print_level", 00:05:03.941 "log_set_print_level", 00:05:03.941 "framework_enable_cpumask_locks", 00:05:03.941 "framework_disable_cpumask_locks", 00:05:03.941 "framework_wait_init", 00:05:03.941 "framework_start_init", 00:05:03.941 "scsi_get_devices", 00:05:03.941 "bdev_get_histogram", 00:05:03.941 "bdev_enable_histogram", 00:05:03.941 "bdev_set_qos_limit", 00:05:03.941 "bdev_set_qd_sampling_period", 00:05:03.941 "bdev_get_bdevs", 00:05:03.941 "bdev_reset_iostat", 00:05:03.941 "bdev_get_iostat", 00:05:03.941 "bdev_examine", 00:05:03.941 "bdev_wait_for_examine", 00:05:03.941 "bdev_set_options", 00:05:03.941 "notify_get_notifications", 00:05:03.941 "notify_get_types", 00:05:03.941 "accel_get_stats", 00:05:03.941 "accel_set_options", 00:05:03.941 "accel_set_driver", 00:05:03.941 "accel_crypto_key_destroy", 00:05:03.941 "accel_crypto_keys_get", 00:05:03.941 "accel_crypto_key_create", 00:05:03.941 "accel_assign_opc", 00:05:03.941 "accel_get_module_info", 00:05:03.941 "accel_get_opc_assignments", 00:05:03.941 "vmd_rescan", 00:05:03.941 "vmd_remove_device", 00:05:03.941 "vmd_enable", 00:05:03.941 "sock_set_default_impl", 00:05:03.941 
"sock_impl_set_options", 00:05:03.941 "sock_impl_get_options", 00:05:03.941 "iobuf_get_stats", 00:05:03.941 "iobuf_set_options", 00:05:03.941 "framework_get_pci_devices", 00:05:03.941 "framework_get_config", 00:05:03.941 "framework_get_subsystems", 00:05:03.941 "trace_get_info", 00:05:03.941 "trace_get_tpoint_group_mask", 00:05:03.941 "trace_disable_tpoint_group", 00:05:03.941 "trace_enable_tpoint_group", 00:05:03.941 "trace_clear_tpoint_mask", 00:05:03.941 "trace_set_tpoint_mask", 00:05:03.941 "spdk_get_version", 00:05:03.941 "rpc_get_methods" 00:05:03.941 ] 00:05:03.941 05:20:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.941 05:20:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.941 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.941 05:20:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.941 05:20:07 -- spdkcli/tcp.sh@38 -- # killprocess 1603373 00:05:03.941 05:20:07 -- common/autotest_common.sh@936 -- # '[' -z 1603373 ']' 00:05:03.941 05:20:07 -- common/autotest_common.sh@940 -- # kill -0 1603373 00:05:03.941 05:20:07 -- common/autotest_common.sh@941 -- # uname 00:05:03.941 05:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.941 05:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1603373 00:05:03.941 05:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.941 05:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.941 05:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1603373' 00:05:03.941 killing process with pid 1603373 00:05:03.941 05:20:07 -- common/autotest_common.sh@955 -- # kill 1603373 00:05:03.941 05:20:07 -- common/autotest_common.sh@960 -- # wait 1603373 00:05:04.203 00:05:04.203 real 0m1.495s 00:05:04.203 user 0m2.718s 00:05:04.203 sys 0m0.438s 00:05:04.203 05:20:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.203 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 ************************************ 00:05:04.203 END TEST spdkcli_tcp 00:05:04.203 ************************************ 00:05:04.203 05:20:07 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.203 05:20:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.203 05:20:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.203 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 ************************************ 00:05:04.203 START TEST dpdk_mem_utility 00:05:04.203 ************************************ 00:05:04.203 05:20:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.464 * Looking for test storage... 
00:05:04.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:04.464 05:20:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.464 05:20:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.465 05:20:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.465 05:20:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.465 05:20:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.465 05:20:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.465 05:20:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.465 05:20:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.465 05:20:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.465 05:20:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.465 05:20:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.465 05:20:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.465 05:20:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.465 05:20:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.465 05:20:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.465 05:20:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.465 05:20:07 -- scripts/common.sh@344 -- # : 1 00:05:04.465 05:20:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.465 05:20:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.465 05:20:07 -- scripts/common.sh@364 -- # decimal 1 00:05:04.465 05:20:07 -- scripts/common.sh@352 -- # local d=1 00:05:04.465 05:20:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.465 05:20:07 -- scripts/common.sh@354 -- # echo 1 00:05:04.465 05:20:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.465 05:20:07 -- scripts/common.sh@365 -- # decimal 2 00:05:04.465 05:20:07 -- scripts/common.sh@352 -- # local d=2 00:05:04.465 05:20:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.465 05:20:07 -- scripts/common.sh@354 -- # echo 2 00:05:04.465 05:20:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.465 05:20:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.465 05:20:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.465 05:20:07 -- scripts/common.sh@367 -- # return 0 00:05:04.465 05:20:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.465 05:20:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.465 --rc genhtml_branch_coverage=1 00:05:04.465 --rc genhtml_function_coverage=1 00:05:04.465 --rc genhtml_legend=1 00:05:04.465 --rc geninfo_all_blocks=1 00:05:04.465 --rc geninfo_unexecuted_blocks=1 00:05:04.465 00:05:04.465 ' 00:05:04.465 05:20:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.465 --rc genhtml_branch_coverage=1 00:05:04.465 --rc genhtml_function_coverage=1 00:05:04.465 --rc genhtml_legend=1 00:05:04.465 --rc geninfo_all_blocks=1 00:05:04.465 --rc geninfo_unexecuted_blocks=1 00:05:04.465 00:05:04.465 ' 00:05:04.465 05:20:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.465 --rc genhtml_branch_coverage=1 00:05:04.465 --rc genhtml_function_coverage=1 00:05:04.465 --rc genhtml_legend=1 00:05:04.465 --rc geninfo_all_blocks=1 00:05:04.465 --rc geninfo_unexecuted_blocks=1 00:05:04.465 
00:05:04.465 ' 00:05:04.465 05:20:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.465 --rc genhtml_branch_coverage=1 00:05:04.465 --rc genhtml_function_coverage=1 00:05:04.465 --rc genhtml_legend=1 00:05:04.465 --rc geninfo_all_blocks=1 00:05:04.465 --rc geninfo_unexecuted_blocks=1 00:05:04.465 00:05:04.465 ' 00:05:04.465 05:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.465 05:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1603783 00:05:04.465 05:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1603783 00:05:04.465 05:20:07 -- common/autotest_common.sh@829 -- # '[' -z 1603783 ']' 00:05:04.465 05:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.465 05:20:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.465 05:20:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.465 05:20:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.465 05:20:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.465 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:04.465 [2024-12-07 05:20:07.614415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.465 [2024-12-07 05:20:07.614496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603783 ] 00:05:04.465 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.465 [2024-12-07 05:20:07.695951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.725 [2024-12-07 05:20:07.755677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.725 [2024-12-07 05:20:07.755782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.297 05:20:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.297 05:20:08 -- common/autotest_common.sh@862 -- # return 0 00:05:05.297 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.297 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.297 05:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.297 05:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.297 { 00:05:05.297 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.297 } 00:05:05.297 05:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.297 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.297 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:05.297 1 heaps totaling size 814.000000 MiB 00:05:05.297 size: 814.000000 MiB heap id: 0 00:05:05.297 end heaps---------- 00:05:05.297 8 mempools totaling size 598.116089 MiB 00:05:05.297 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.297 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.297 size: 84.521057 
MiB name: bdev_io_1603783 00:05:05.297 size: 51.011292 MiB name: evtpool_1603783 00:05:05.297 size: 50.003479 MiB name: msgpool_1603783 00:05:05.297 size: 21.763794 MiB name: PDU_Pool 00:05:05.297 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.297 size: 0.026123 MiB name: Session_Pool 00:05:05.297 end mempools------- 00:05:05.297 6 memzones totaling size 4.142822 MiB 00:05:05.297 size: 1.000366 MiB name: RG_ring_0_1603783 00:05:05.297 size: 1.000366 MiB name: RG_ring_1_1603783 00:05:05.297 size: 1.000366 MiB name: RG_ring_4_1603783 00:05:05.297 size: 1.000366 MiB name: RG_ring_5_1603783 00:05:05.297 size: 0.125366 MiB name: RG_ring_2_1603783 00:05:05.297 size: 0.015991 MiB name: RG_ring_3_1603783 00:05:05.297 end memzones------- 00:05:05.297 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.297 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:05.297 list of free elements. size: 12.519348 MiB 00:05:05.297 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:05.297 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:05.297 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:05.297 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:05.297 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:05.297 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:05.297 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:05.297 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:05.297 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:05.297 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:05.297 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:05.297 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:05.297 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:05.297 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:05.297 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:05.297 list of standard malloc elements. 
size: 199.218079 MiB 00:05:05.297 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:05.297 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:05.297 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:05.297 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:05.297 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:05.297 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:05.297 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:05.297 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:05.297 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:05.297 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:05.297 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:05.297 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:05.297 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:05.297 list of memzone associated elements. 
size: 602.262573 MiB 00:05:05.297 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:05.297 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.297 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:05.297 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.297 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:05.297 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1603783_0 00:05:05.297 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:05.297 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1603783_0 00:05:05.297 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:05.297 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1603783_0 00:05:05.297 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:05.297 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.297 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:05.297 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.297 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:05.297 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1603783 00:05:05.297 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:05.297 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1603783 00:05:05.297 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:05.297 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1603783 00:05:05.297 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:05.297 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.297 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:05.297 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.297 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:05.297 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.297 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:05.297 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.297 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:05.297 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1603783 00:05:05.297 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:05.297 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1603783 00:05:05.297 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:05.297 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1603783 00:05:05.297 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:05.297 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1603783 00:05:05.297 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:05.298 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1603783 00:05:05.298 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:05.298 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.298 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:05.298 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.298 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:05.298 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.298 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:05.298 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1603783 00:05:05.298 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:05.298 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.298 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:05.298 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.298 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:05.298 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1603783 00:05:05.298 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:05.298 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.298 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:05.298 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1603783 00:05:05.298 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:05.298 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1603783 00:05:05.298 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:05.298 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.298 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.298 05:20:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1603783 00:05:05.298 05:20:08 -- common/autotest_common.sh@936 -- # '[' -z 1603783 ']' 00:05:05.298 05:20:08 -- common/autotest_common.sh@940 -- # kill -0 1603783 00:05:05.298 05:20:08 -- common/autotest_common.sh@941 -- # uname 00:05:05.298 05:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.298 05:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1603783 00:05:05.558 05:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.558 05:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.558 05:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1603783' 00:05:05.558 killing process with pid 1603783 00:05:05.558 05:20:08 -- common/autotest_common.sh@955 -- # kill 1603783 00:05:05.558 05:20:08 -- common/autotest_common.sh@960 -- # wait 1603783 00:05:05.558 00:05:05.558 real 0m1.364s 00:05:05.558 user 0m1.431s 00:05:05.558 sys 0m0.397s 00:05:05.558 05:20:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.558 05:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 ************************************ 00:05:05.558 END TEST dpdk_mem_utility 00:05:05.558 ************************************ 00:05:05.558 05:20:08 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.558 05:20:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.558 05:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.558 05:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 ************************************ 00:05:05.558 START TEST event 00:05:05.558 ************************************ 00:05:05.558 05:20:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.818 * Looking for test storage... 
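The heap, mempool and memzone tables printed above are the output of scripts/dpdk_mem_info.py, driven by the dpdk_mem_utility test that ends here. A minimal sketch of the same flow by hand, assuming a running spdk_tgt on the default socket and an SPDK checkout as the working directory:

  # Have the target dump its DPDK memory stats (the test shows the dump file /tmp/spdk_mem_dump.txt)
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py
  # Per-element breakdown for id 0, matching the '-m 0' invocation in the log
  ./scripts/dpdk_mem_info.py -m 0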
00:05:05.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.818 05:20:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.818 05:20:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.818 05:20:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.818 05:20:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.818 05:20:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.818 05:20:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.818 05:20:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.818 05:20:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.818 05:20:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.818 05:20:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.818 05:20:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.818 05:20:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.818 05:20:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.818 05:20:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.818 05:20:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.818 05:20:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.818 05:20:08 -- scripts/common.sh@344 -- # : 1 00:05:05.818 05:20:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.818 05:20:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.819 05:20:08 -- scripts/common.sh@364 -- # decimal 1 00:05:05.819 05:20:08 -- scripts/common.sh@352 -- # local d=1 00:05:05.819 05:20:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.819 05:20:08 -- scripts/common.sh@354 -- # echo 1 00:05:05.819 05:20:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.819 05:20:08 -- scripts/common.sh@365 -- # decimal 2 00:05:05.819 05:20:08 -- scripts/common.sh@352 -- # local d=2 00:05:05.819 05:20:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.819 05:20:08 -- scripts/common.sh@354 -- # echo 2 00:05:05.819 05:20:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.819 05:20:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.819 05:20:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.819 05:20:08 -- scripts/common.sh@367 -- # return 0 00:05:05.819 05:20:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.819 05:20:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 05:20:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 05:20:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 
00:05:05.819 05:20:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 05:20:08 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:05.819 05:20:08 -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.819 05:20:08 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.819 05:20:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:05.819 05:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.819 05:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.819 ************************************ 00:05:05.819 START TEST event_perf 00:05:05.819 ************************************ 00:05:05.819 05:20:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.819 Running I/O for 1 seconds...[2024-12-07 05:20:09.002743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.819 [2024-12-07 05:20:09.002860] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604188 ] 00:05:05.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.078 [2024-12-07 05:20:09.086974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.078 [2024-12-07 05:20:09.157213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.078 [2024-12-07 05:20:09.157368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.078 [2024-12-07 05:20:09.157521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.078 [2024-12-07 05:20:09.157521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.018 Running I/O for 1 seconds... 00:05:07.018 lcore 0: 175407 00:05:07.018 lcore 1: 175410 00:05:07.018 lcore 2: 175409 00:05:07.018 lcore 3: 175410 00:05:07.018 done. 
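The per-lcore event counters and the closing "done." above come from the event_perf benchmark started earlier in this block. A minimal sketch of invoking it and its reactor siblings directly, using the same arguments the test passes; paths are written relative to an SPDK build tree rather than the Jenkins workspace shown in the log:

  # Event framework benchmark: 4 cores (mask 0xF), 1 second
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # Single-core reactor tick test and reactor performance test, as run later in this log
  ./test/event/reactor/reactor -t 1
  ./test/event/reactor_perf/reactor_perf -t 1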
00:05:07.018 00:05:07.018 real 0m1.221s 00:05:07.018 user 0m4.116s 00:05:07.018 sys 0m0.102s 00:05:07.018 05:20:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.018 05:20:10 -- common/autotest_common.sh@10 -- # set +x 00:05:07.018 ************************************ 00:05:07.018 END TEST event_perf 00:05:07.018 ************************************ 00:05:07.018 05:20:10 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.018 05:20:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:07.018 05:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.018 05:20:10 -- common/autotest_common.sh@10 -- # set +x 00:05:07.018 ************************************ 00:05:07.018 START TEST event_reactor 00:05:07.018 ************************************ 00:05:07.018 05:20:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.278 [2024-12-07 05:20:10.266405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.278 [2024-12-07 05:20:10.266516] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604542 ] 00:05:07.278 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.278 [2024-12-07 05:20:10.348961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.278 [2024-12-07 05:20:10.407683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.218 test_start 00:05:08.218 oneshot 00:05:08.218 tick 100 00:05:08.218 tick 100 00:05:08.218 tick 250 00:05:08.219 tick 100 00:05:08.219 tick 100 00:05:08.219 tick 100 00:05:08.219 tick 250 00:05:08.219 tick 500 00:05:08.219 tick 100 00:05:08.219 tick 100 00:05:08.219 tick 250 00:05:08.219 tick 100 00:05:08.219 tick 100 00:05:08.219 test_end 00:05:08.219 00:05:08.219 real 0m1.206s 00:05:08.219 user 0m1.115s 00:05:08.219 sys 0m0.087s 00:05:08.219 05:20:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.219 05:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.219 ************************************ 00:05:08.219 END TEST event_reactor 00:05:08.219 ************************************ 00:05:08.478 05:20:11 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.478 05:20:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:08.478 05:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.478 05:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.478 ************************************ 00:05:08.478 START TEST event_reactor_perf 00:05:08.478 ************************************ 00:05:08.478 05:20:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.479 [2024-12-07 05:20:11.518608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:08.479 [2024-12-07 05:20:11.518712] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604689 ] 00:05:08.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.479 [2024-12-07 05:20:11.600489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.479 [2024-12-07 05:20:11.658697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.862 test_start 00:05:09.862 test_end 00:05:09.862 Performance: 533596 events per second 00:05:09.862 00:05:09.862 real 0m1.205s 00:05:09.862 user 0m1.113s 00:05:09.862 sys 0m0.089s 00:05:09.862 05:20:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.862 05:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.862 ************************************ 00:05:09.862 END TEST event_reactor_perf 00:05:09.862 ************************************ 00:05:09.862 05:20:12 -- event/event.sh@49 -- # uname -s 00:05:09.862 05:20:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.862 05:20:12 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.862 05:20:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.862 05:20:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.862 05:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.862 ************************************ 00:05:09.862 START TEST event_scheduler 00:05:09.862 ************************************ 00:05:09.862 05:20:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.862 * Looking for test storage... 00:05:09.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:09.862 05:20:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.862 05:20:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.862 05:20:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.862 05:20:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.862 05:20:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.862 05:20:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.862 05:20:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.862 05:20:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.862 05:20:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.862 05:20:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.862 05:20:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.862 05:20:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.862 05:20:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.862 05:20:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.862 05:20:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.862 05:20:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.862 05:20:12 -- scripts/common.sh@344 -- # : 1 00:05:09.862 05:20:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.862 05:20:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.862 05:20:12 -- scripts/common.sh@364 -- # decimal 1 00:05:09.862 05:20:12 -- scripts/common.sh@352 -- # local d=1 00:05:09.862 05:20:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.862 05:20:12 -- scripts/common.sh@354 -- # echo 1 00:05:09.862 05:20:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.862 05:20:12 -- scripts/common.sh@365 -- # decimal 2 00:05:09.862 05:20:12 -- scripts/common.sh@352 -- # local d=2 00:05:09.862 05:20:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.862 05:20:12 -- scripts/common.sh@354 -- # echo 2 00:05:09.862 05:20:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.862 05:20:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.862 05:20:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.862 05:20:12 -- scripts/common.sh@367 -- # return 0 00:05:09.862 05:20:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.862 05:20:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.862 --rc genhtml_branch_coverage=1 00:05:09.862 --rc genhtml_function_coverage=1 00:05:09.862 --rc genhtml_legend=1 00:05:09.862 --rc geninfo_all_blocks=1 00:05:09.862 --rc geninfo_unexecuted_blocks=1 00:05:09.862 00:05:09.862 ' 00:05:09.862 05:20:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.862 --rc genhtml_branch_coverage=1 00:05:09.862 --rc genhtml_function_coverage=1 00:05:09.862 --rc genhtml_legend=1 00:05:09.862 --rc geninfo_all_blocks=1 00:05:09.862 --rc geninfo_unexecuted_blocks=1 00:05:09.862 00:05:09.862 ' 00:05:09.862 05:20:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.862 --rc genhtml_branch_coverage=1 00:05:09.862 --rc genhtml_function_coverage=1 00:05:09.862 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 05:20:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.863 --rc genhtml_branch_coverage=1 00:05:09.863 --rc genhtml_function_coverage=1 00:05:09.863 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 05:20:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.863 05:20:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1604982 00:05:09.863 05:20:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.863 05:20:12 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.863 05:20:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 1604982 00:05:09.863 05:20:12 -- common/autotest_common.sh@829 -- # '[' -z 1604982 ']' 00:05:09.863 05:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.863 05:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.863 05:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:09.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.863 05:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.863 05:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.863 [2024-12-07 05:20:12.990587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.863 [2024-12-07 05:20:12.990670] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604982 ] 00:05:09.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.863 [2024-12-07 05:20:13.073771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.124 [2024-12-07 05:20:13.166615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.124 [2024-12-07 05:20:13.166777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.124 [2024-12-07 05:20:13.166938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.124 [2024-12-07 05:20:13.166938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.724 05:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.724 05:20:13 -- common/autotest_common.sh@862 -- # return 0 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 POWER: Env isn't set yet! 00:05:10.724 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:10.724 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.724 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.724 POWER: Attempting to initialise PSTAT power management... 00:05:10.724 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:10.724 POWER: Initialized successfully for lcore 0 power management 00:05:10.724 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:10.724 POWER: Initialized successfully for lcore 1 power management 00:05:10.724 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:10.724 POWER: Initialized successfully for lcore 2 power management 00:05:10.724 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:10.724 POWER: Initialized successfully for lcore 3 power management 00:05:10.724 [2024-12-07 05:20:13.831256] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:10.724 [2024-12-07 05:20:13.831268] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:10.724 [2024-12-07 05:20:13.831274] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:10.724 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 [2024-12-07 05:20:13.888849] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
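The notices above show the scheduler test application being started paused, switched to the dynamic scheduler, and then allowed to finish initialization. A minimal sketch of that RPC sequence, assuming the binary is launched with --wait-for-rpc exactly as in the log and that the default RPC socket is used:

  # Start the test app paused so a scheduler can be chosen before init completes
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # Select the dynamic scheduler, then resume initialization
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # Optional: confirm the active scheduler and reactor layout
  ./scripts/rpc.py framework_get_scheduler
  ./scripts/rpc.py framework_get_reactors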
00:05:10.724 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.724 05:20:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.724 05:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 ************************************ 00:05:10.724 START TEST scheduler_create_thread 00:05:10.724 ************************************ 00:05:10.724 05:20:13 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 2 00:05:10.724 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 3 00:05:10.724 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 4 00:05:10.724 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.724 05:20:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.724 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.724 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 5 00:05:10.986 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.986 05:20:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.986 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.986 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 6 00:05:10.986 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.986 05:20:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.986 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.986 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 7 00:05:10.986 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.986 05:20:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.986 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.986 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.986 8 00:05:10.986 05:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.986 05:20:14 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.986 05:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.986 05:20:14 -- common/autotest_common.sh@10 -- # set +x 00:05:11.928 9 00:05:11.928 
05:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.928 05:20:14 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.928 05:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.928 05:20:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.868 10 00:05:12.868 05:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.868 05:20:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.868 05:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.868 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.810 05:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.810 05:20:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.810 05:20:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.810 05:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.810 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.379 05:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.380 05:20:17 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.380 05:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.380 05:20:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.317 05:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.317 05:20:18 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.317 05:20:18 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.318 05:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.318 05:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.888 05:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.888 00:05:15.888 real 0m4.968s 00:05:15.888 user 0m0.025s 00:05:15.888 sys 0m0.005s 00:05:15.888 05:20:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.888 05:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.888 ************************************ 00:05:15.888 END TEST scheduler_create_thread 00:05:15.888 ************************************ 00:05:15.888 05:20:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.888 05:20:18 -- scheduler/scheduler.sh@46 -- # killprocess 1604982 00:05:15.888 05:20:18 -- common/autotest_common.sh@936 -- # '[' -z 1604982 ']' 00:05:15.888 05:20:18 -- common/autotest_common.sh@940 -- # kill -0 1604982 00:05:15.888 05:20:18 -- common/autotest_common.sh@941 -- # uname 00:05:15.888 05:20:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.888 05:20:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1604982 00:05:15.888 05:20:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:15.888 05:20:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:15.888 05:20:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1604982' 00:05:15.888 killing process with pid 1604982 00:05:15.888 05:20:18 -- common/autotest_common.sh@955 -- # kill 1604982 00:05:15.888 05:20:18 -- common/autotest_common.sh@960 -- # wait 1604982 00:05:15.888 [2024-12-07 05:20:19.044925] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
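The bare thread numbers (2 through 10) interleaved above are threads being created one at a time through the test's scheduler_plugin RPCs, and the scheduler_thread_set_active and scheduler_thread_delete calls exercise them afterwards. A minimal sketch of that pattern, lifted from the calls visible in this block; note that scheduler_plugin is a test-only rpc.py plugin shipped alongside the scheduler app, so rpc.py must be able to import it (for example via PYTHONPATH), which this excerpt does not show:

  # Create an always-busy thread pinned to core 0 (cpumask 0x1, 100% active)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Lower thread 11 to 50% active, then delete thread 12, mirroring the test
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12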
00:05:16.149 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:16.149 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:16.149 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:16.149 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:16.149 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:16.149 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:16.149 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:16.149 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:16.149 00:05:16.149 real 0m6.482s 00:05:16.149 user 0m15.361s 00:05:16.149 sys 0m0.405s 00:05:16.149 05:20:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.149 05:20:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.149 ************************************ 00:05:16.149 END TEST event_scheduler 00:05:16.149 ************************************ 00:05:16.149 05:20:19 -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.149 05:20:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.149 05:20:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.149 05:20:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.149 05:20:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.149 ************************************ 00:05:16.149 START TEST app_repeat 00:05:16.149 ************************************ 00:05:16.149 05:20:19 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:16.149 05:20:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.149 05:20:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.149 05:20:19 -- event/event.sh@13 -- # local nbd_list 00:05:16.149 05:20:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.149 05:20:19 -- event/event.sh@14 -- # local bdev_list 00:05:16.149 05:20:19 -- event/event.sh@15 -- # local repeat_times=4 00:05:16.149 05:20:19 -- event/event.sh@17 -- # modprobe nbd 00:05:16.149 05:20:19 -- event/event.sh@19 -- # repeat_pid=1606368 00:05:16.149 05:20:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.149 05:20:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1606368' 00:05:16.149 Process app_repeat pid: 1606368 00:05:16.149 05:20:19 -- event/event.sh@23 -- # for i in {0..2} 00:05:16.149 05:20:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.149 spdk_app_start Round 0 00:05:16.149 05:20:19 -- event/event.sh@25 -- # waitforlisten 1606368 /var/tmp/spdk-nbd.sock 00:05:16.149 05:20:19 -- common/autotest_common.sh@829 -- # '[' -z 1606368 ']' 00:05:16.149 05:20:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.149 05:20:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.149 05:20:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
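Each app_repeat round that follows repeats the same cycle against the /var/tmp/spdk-nbd.sock socket named above: create two 64 MB malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, write and verify random data, then tear the disks down. A minimal sketch of one round, using only RPCs that appear later in this block; /tmp/nbdrandtest stands in for the test's temporary file, and the commands assume root so the nbd devices can be attached:

  # Back two nbd devices with freshly created malloc bdevs (Malloc0, Malloc1)
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  # Write 1 MiB of random data through each nbd device and verify it
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest $d
  done
  # Stop both nbd devices and request instance shutdown; app_repeat then brings up the next round
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM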
00:05:16.149 05:20:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.149 05:20:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.149 05:20:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.149 [2024-12-07 05:20:19.310350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.149 [2024-12-07 05:20:19.310435] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606368 ] 00:05:16.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.149 [2024-12-07 05:20:19.374614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.409 [2024-12-07 05:20:19.440756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.409 [2024-12-07 05:20:19.440759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.979 05:20:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.979 05:20:20 -- common/autotest_common.sh@862 -- # return 0 00:05:16.979 05:20:20 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.238 Malloc0 00:05:17.238 05:20:20 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.238 Malloc1 00:05:17.238 05:20:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.238 05:20:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.238 05:20:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.239 05:20:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.499 /dev/nbd0 00:05:17.499 05:20:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.499 05:20:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.499 05:20:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:17.499 05:20:20 -- common/autotest_common.sh@867 -- # local i 00:05:17.499 05:20:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.499 05:20:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.499 05:20:20 -- common/autotest_common.sh@870 -- # grep -q -w 
nbd0 /proc/partitions 00:05:17.499 05:20:20 -- common/autotest_common.sh@871 -- # break 00:05:17.499 05:20:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.499 05:20:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.499 05:20:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.499 1+0 records in 00:05:17.499 1+0 records out 00:05:17.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297115 s, 13.8 MB/s 00:05:17.499 05:20:20 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.499 05:20:20 -- common/autotest_common.sh@884 -- # size=4096 00:05:17.499 05:20:20 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.499 05:20:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.499 05:20:20 -- common/autotest_common.sh@887 -- # return 0 00:05:17.499 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.499 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.499 05:20:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.760 /dev/nbd1 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.760 05:20:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.760 05:20:20 -- common/autotest_common.sh@867 -- # local i 00:05:17.760 05:20:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.760 05:20:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.760 05:20:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.760 05:20:20 -- common/autotest_common.sh@871 -- # break 00:05:17.760 05:20:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.760 05:20:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.760 05:20:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.760 1+0 records in 00:05:17.760 1+0 records out 00:05:17.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293291 s, 14.0 MB/s 00:05:17.760 05:20:20 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.760 05:20:20 -- common/autotest_common.sh@884 -- # size=4096 00:05:17.760 05:20:20 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.760 05:20:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.760 05:20:20 -- common/autotest_common.sh@887 -- # return 0 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.760 { 00:05:17.760 "nbd_device": "/dev/nbd0", 00:05:17.760 "bdev_name": "Malloc0" 00:05:17.760 }, 
00:05:17.760 { 00:05:17.760 "nbd_device": "/dev/nbd1", 00:05:17.760 "bdev_name": "Malloc1" 00:05:17.760 } 00:05:17.760 ]' 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.760 { 00:05:17.760 "nbd_device": "/dev/nbd0", 00:05:17.760 "bdev_name": "Malloc0" 00:05:17.760 }, 00:05:17.760 { 00:05:17.760 "nbd_device": "/dev/nbd1", 00:05:17.760 "bdev_name": "Malloc1" 00:05:17.760 } 00:05:17.760 ]' 00:05:17.760 05:20:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.020 /dev/nbd1' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.020 /dev/nbd1' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.020 256+0 records in 00:05:18.020 256+0 records out 00:05:18.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127494 s, 82.2 MB/s 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.020 256+0 records in 00:05:18.020 256+0 records out 00:05:18.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164662 s, 63.7 MB/s 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.020 256+0 records in 00:05:18.020 256+0 records out 00:05:18.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214729 s, 48.8 MB/s 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:05:18.020 05:20:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@51 -- # local i 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.020 05:20:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@41 -- # break 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@41 -- # break 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.285 05:20:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@65 -- # true 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.546 05:20:21 -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.546 05:20:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
spdk_kill_instance SIGTERM 00:05:18.806 05:20:21 -- event/event.sh@35 -- # sleep 3 00:05:18.806 [2024-12-07 05:20:21.973167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.806 [2024-12-07 05:20:22.034159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.806 [2024-12-07 05:20:22.034164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.066 [2024-12-07 05:20:22.065901] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.066 [2024-12-07 05:20:22.065936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.626 05:20:24 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.626 05:20:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.626 spdk_app_start Round 1 00:05:21.626 05:20:24 -- event/event.sh@25 -- # waitforlisten 1606368 /var/tmp/spdk-nbd.sock 00:05:21.626 05:20:24 -- common/autotest_common.sh@829 -- # '[' -z 1606368 ']' 00:05:21.626 05:20:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.626 05:20:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.626 05:20:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.626 05:20:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.626 05:20:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.886 05:20:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.886 05:20:25 -- common/autotest_common.sh@862 -- # return 0 00:05:21.886 05:20:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.147 Malloc0 00:05:22.147 05:20:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.147 Malloc1 00:05:22.147 05:20:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@12 -- # local i 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.147 05:20:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 
/dev/nbd0 00:05:22.407 /dev/nbd0 00:05:22.407 05:20:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.407 05:20:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.407 05:20:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.407 05:20:25 -- common/autotest_common.sh@867 -- # local i 00:05:22.407 05:20:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.407 05:20:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.407 05:20:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.407 05:20:25 -- common/autotest_common.sh@871 -- # break 00:05:22.407 05:20:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.407 05:20:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.407 05:20:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.407 1+0 records in 00:05:22.407 1+0 records out 00:05:22.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272919 s, 15.0 MB/s 00:05:22.407 05:20:25 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.407 05:20:25 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.407 05:20:25 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.407 05:20:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.407 05:20:25 -- common/autotest_common.sh@887 -- # return 0 00:05:22.407 05:20:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.407 05:20:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.407 05:20:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.666 /dev/nbd1 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.666 05:20:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.666 05:20:25 -- common/autotest_common.sh@867 -- # local i 00:05:22.666 05:20:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.666 05:20:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.666 05:20:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.666 05:20:25 -- common/autotest_common.sh@871 -- # break 00:05:22.666 05:20:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.666 05:20:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.666 05:20:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.666 1+0 records in 00:05:22.666 1+0 records out 00:05:22.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285737 s, 14.3 MB/s 00:05:22.666 05:20:25 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.666 05:20:25 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.666 05:20:25 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.666 05:20:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.666 05:20:25 -- common/autotest_common.sh@887 -- # return 0 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.666 05:20:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.666 { 00:05:22.666 "nbd_device": "/dev/nbd0", 00:05:22.666 "bdev_name": "Malloc0" 00:05:22.666 }, 00:05:22.666 { 00:05:22.666 "nbd_device": "/dev/nbd1", 00:05:22.666 "bdev_name": "Malloc1" 00:05:22.666 } 00:05:22.667 ]' 00:05:22.667 05:20:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.667 { 00:05:22.667 "nbd_device": "/dev/nbd0", 00:05:22.667 "bdev_name": "Malloc0" 00:05:22.667 }, 00:05:22.667 { 00:05:22.667 "nbd_device": "/dev/nbd1", 00:05:22.667 "bdev_name": "Malloc1" 00:05:22.667 } 00:05:22.667 ]' 00:05:22.667 05:20:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.927 /dev/nbd1' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.927 /dev/nbd1' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.927 256+0 records in 00:05:22.927 256+0 records out 00:05:22.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127093 s, 82.5 MB/s 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.927 256+0 records in 00:05:22.927 256+0 records out 00:05:22.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166623 s, 62.9 MB/s 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.927 256+0 records in 00:05:22.927 256+0 records out 00:05:22.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177158 s, 59.2 MB/s 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.927 05:20:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.927 05:20:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.188 05:20:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@41 -- # break 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@41 -- # break 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.189 05:20:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@65 -- # true 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.449 05:20:26 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.449 05:20:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.709 05:20:26 -- event/event.sh@35 -- # sleep 3 00:05:23.709 [2024-12-07 05:20:26.879449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.709 [2024-12-07 05:20:26.940127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.709 [2024-12-07 05:20:26.940130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.969 [2024-12-07 05:20:26.971904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.970 [2024-12-07 05:20:26.971939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.516 05:20:29 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.516 05:20:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.516 spdk_app_start Round 2 00:05:26.516 05:20:29 -- event/event.sh@25 -- # waitforlisten 1606368 /var/tmp/spdk-nbd.sock 00:05:26.516 05:20:29 -- common/autotest_common.sh@829 -- # '[' -z 1606368 ']' 00:05:26.516 05:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.516 05:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.516 05:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
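The waitfornbd checks traced repeatedly above reduce to a small poll-and-read helper: wait for the device to show up in /proc/partitions, then confirm it answers a single O_DIRECT read. A condensed sketch of that readiness test (the scratch file path and the retry delay here are placeholders, not the suite's exact values):

  # Poll for the nbd device, then read one 4 KiB block back through
  # O_DIRECT; a non-empty read means the nbd connection is really live.
  waitfornbd() {
      local nbd_name=$1 i tmp=/tmp/nbdtest
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                       # retry delay assumed for this sketch
      done
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
  }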
00:05:26.516 05:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.516 05:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 05:20:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.776 05:20:29 -- common/autotest_common.sh@862 -- # return 0 00:05:26.776 05:20:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.038 Malloc0 00:05:27.038 05:20:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.038 Malloc1 00:05:27.038 05:20:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@12 -- # local i 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.038 05:20:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.300 /dev/nbd0 00:05:27.300 05:20:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.300 05:20:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.300 05:20:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:27.300 05:20:30 -- common/autotest_common.sh@867 -- # local i 00:05:27.300 05:20:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.300 05:20:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.300 05:20:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:27.300 05:20:30 -- common/autotest_common.sh@871 -- # break 00:05:27.300 05:20:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.300 05:20:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.300 05:20:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.300 1+0 records in 00:05:27.300 1+0 records out 00:05:27.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208982 s, 19.6 MB/s 00:05:27.300 05:20:30 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.300 05:20:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:27.300 05:20:30 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.300 05:20:30 -- common/autotest_common.sh@886 -- # 
'[' 4096 '!=' 0 ']' 00:05:27.300 05:20:30 -- common/autotest_common.sh@887 -- # return 0 00:05:27.300 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.300 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.300 05:20:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.561 /dev/nbd1 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.561 05:20:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:27.561 05:20:30 -- common/autotest_common.sh@867 -- # local i 00:05:27.561 05:20:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.561 05:20:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.561 05:20:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:27.561 05:20:30 -- common/autotest_common.sh@871 -- # break 00:05:27.561 05:20:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.561 05:20:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.561 05:20:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.561 1+0 records in 00:05:27.561 1+0 records out 00:05:27.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293552 s, 14.0 MB/s 00:05:27.561 05:20:30 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.561 05:20:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:27.561 05:20:30 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.561 05:20:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.561 05:20:30 -- common/autotest_common.sh@887 -- # return 0 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.561 { 00:05:27.561 "nbd_device": "/dev/nbd0", 00:05:27.561 "bdev_name": "Malloc0" 00:05:27.561 }, 00:05:27.561 { 00:05:27.561 "nbd_device": "/dev/nbd1", 00:05:27.561 "bdev_name": "Malloc1" 00:05:27.561 } 00:05:27.561 ]' 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.561 { 00:05:27.561 "nbd_device": "/dev/nbd0", 00:05:27.561 "bdev_name": "Malloc0" 00:05:27.561 }, 00:05:27.561 { 00:05:27.561 "nbd_device": "/dev/nbd1", 00:05:27.561 "bdev_name": "Malloc1" 00:05:27.561 } 00:05:27.561 ]' 00:05:27.561 05:20:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.822 /dev/nbd1' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.822 /dev/nbd1' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.822 05:20:30 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.822 256+0 records in 00:05:27.822 256+0 records out 00:05:27.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126997 s, 82.6 MB/s 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.822 256+0 records in 00:05:27.822 256+0 records out 00:05:27.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164375 s, 63.8 MB/s 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.822 256+0 records in 00:05:27.822 256+0 records out 00:05:27.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184756 s, 56.8 MB/s 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.822 05:20:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.083 05:20:31 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@41 -- # break 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@41 -- # break 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.083 05:20:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@65 -- # true 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.344 05:20:31 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.344 05:20:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.604 05:20:31 -- event/event.sh@35 -- # sleep 3 00:05:28.604 [2024-12-07 05:20:31.796095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.865 [2024-12-07 05:20:31.857395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.865 [2024-12-07 05:20:31.857399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.865 [2024-12-07 05:20:31.888998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.865 [2024-12-07 05:20:31.889037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
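Each round's data pass, traced above as the write and verify calls of nbd_dd_data_verify, comes down to seeding a random file, pushing it through both nbd devices with O_DIRECT, and comparing it back; roughly (scratch path shortened to a placeholder):

  # Write 1 MiB of random data through each exported nbd device, then
  # read it back and compare; any byte mismatch fails the test.
  tmp=/tmp/nbdrandtest                 # placeholder for the suite's scratch file
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"
  done
  rm "$tmp"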
00:05:32.168 05:20:34 -- event/event.sh@38 -- # waitforlisten 1606368 /var/tmp/spdk-nbd.sock 00:05:32.168 05:20:34 -- common/autotest_common.sh@829 -- # '[' -z 1606368 ']' 00:05:32.168 05:20:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.168 05:20:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.168 05:20:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.168 05:20:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.168 05:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:32.168 05:20:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.168 05:20:34 -- common/autotest_common.sh@862 -- # return 0 00:05:32.168 05:20:34 -- event/event.sh@39 -- # killprocess 1606368 00:05:32.168 05:20:34 -- common/autotest_common.sh@936 -- # '[' -z 1606368 ']' 00:05:32.168 05:20:34 -- common/autotest_common.sh@940 -- # kill -0 1606368 00:05:32.168 05:20:34 -- common/autotest_common.sh@941 -- # uname 00:05:32.168 05:20:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.168 05:20:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1606368 00:05:32.168 05:20:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.168 05:20:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.168 05:20:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1606368' 00:05:32.168 killing process with pid 1606368 00:05:32.168 05:20:34 -- common/autotest_common.sh@955 -- # kill 1606368 00:05:32.168 05:20:34 -- common/autotest_common.sh@960 -- # wait 1606368 00:05:32.168 spdk_app_start is called in Round 0. 00:05:32.168 Shutdown signal received, stop current app iteration 00:05:32.168 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:32.168 spdk_app_start is called in Round 1. 00:05:32.168 Shutdown signal received, stop current app iteration 00:05:32.168 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:32.168 spdk_app_start is called in Round 2. 00:05:32.168 Shutdown signal received, stop current app iteration 00:05:32.168 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:32.168 spdk_app_start is called in Round 3. 
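The killprocess call that tears the app down here is essentially a guarded kill-and-wait; a condensed sketch of what the trace above shows (the suite's special handling of a sudo-wrapped process is omitted):

  # Check the pid is still alive and looks like our process, signal it,
  # and wait so the next step starts against a fully stopped target.
  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1               # not running any more
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1       # sudo-wrapped case not covered in this sketch
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }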
00:05:32.168 Shutdown signal received, stop current app iteration 00:05:32.168 05:20:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.168 05:20:35 -- event/event.sh@42 -- # return 0 00:05:32.168 00:05:32.168 real 0m15.737s 00:05:32.168 user 0m34.025s 00:05:32.168 sys 0m2.139s 00:05:32.168 05:20:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.168 05:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.168 ************************************ 00:05:32.168 END TEST app_repeat 00:05:32.168 ************************************ 00:05:32.168 05:20:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.168 05:20:35 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.168 05:20:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.168 05:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.168 05:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.168 ************************************ 00:05:32.168 START TEST cpu_locks 00:05:32.168 ************************************ 00:05:32.168 05:20:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.168 * Looking for test storage... 00:05:32.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.168 05:20:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.168 05:20:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.168 05:20:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.168 05:20:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.168 05:20:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.168 05:20:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.168 05:20:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.168 05:20:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.168 05:20:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.168 05:20:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.168 05:20:35 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.168 05:20:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.168 05:20:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.168 05:20:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.168 05:20:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.168 05:20:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.168 05:20:35 -- scripts/common.sh@344 -- # : 1 00:05:32.168 05:20:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.168 05:20:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.168 05:20:35 -- scripts/common.sh@364 -- # decimal 1 00:05:32.168 05:20:35 -- scripts/common.sh@352 -- # local d=1 00:05:32.168 05:20:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.168 05:20:35 -- scripts/common.sh@354 -- # echo 1 00:05:32.168 05:20:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.168 05:20:35 -- scripts/common.sh@365 -- # decimal 2 00:05:32.168 05:20:35 -- scripts/common.sh@352 -- # local d=2 00:05:32.168 05:20:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.168 05:20:35 -- scripts/common.sh@354 -- # echo 2 00:05:32.168 05:20:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.168 05:20:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.168 05:20:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.168 05:20:35 -- scripts/common.sh@367 -- # return 0 00:05:32.168 05:20:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.168 05:20:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.169 --rc genhtml_branch_coverage=1 00:05:32.169 --rc genhtml_function_coverage=1 00:05:32.169 --rc genhtml_legend=1 00:05:32.169 --rc geninfo_all_blocks=1 00:05:32.169 --rc geninfo_unexecuted_blocks=1 00:05:32.169 00:05:32.169 ' 00:05:32.169 05:20:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.169 --rc genhtml_branch_coverage=1 00:05:32.169 --rc genhtml_function_coverage=1 00:05:32.169 --rc genhtml_legend=1 00:05:32.169 --rc geninfo_all_blocks=1 00:05:32.169 --rc geninfo_unexecuted_blocks=1 00:05:32.169 00:05:32.169 ' 00:05:32.169 05:20:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.169 --rc genhtml_branch_coverage=1 00:05:32.169 --rc genhtml_function_coverage=1 00:05:32.169 --rc genhtml_legend=1 00:05:32.169 --rc geninfo_all_blocks=1 00:05:32.169 --rc geninfo_unexecuted_blocks=1 00:05:32.169 00:05:32.169 ' 00:05:32.169 05:20:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.169 --rc genhtml_branch_coverage=1 00:05:32.169 --rc genhtml_function_coverage=1 00:05:32.169 --rc genhtml_legend=1 00:05:32.169 --rc geninfo_all_blocks=1 00:05:32.169 --rc geninfo_unexecuted_blocks=1 00:05:32.169 00:05:32.169 ' 00:05:32.169 05:20:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.169 05:20:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.169 05:20:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.169 05:20:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.169 05:20:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.169 05:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.169 05:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.169 ************************************ 00:05:32.169 START TEST default_locks 00:05:32.169 ************************************ 00:05:32.169 05:20:35 -- common/autotest_common.sh@1114 -- # default_locks 00:05:32.169 05:20:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1609904 00:05:32.169 05:20:35 -- event/cpu_locks.sh@47 -- # waitforlisten 1609904 00:05:32.169 05:20:35 -- common/autotest_common.sh@829 -- # '[' -z 1609904 ']' 00:05:32.169 
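The default_locks case starting here rests on one observable: a target launched with -m 0x1 holds a per-core file lock, and the suite detects it through lslocks. The check amounts to the following (pid taken from this run):

  # lslocks lists file locks held by the pid; the core lock's name
  # contains "spdk_cpu_lock", so a quiet grep is enough to assert it.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 1609904 || echo "expected core 0 lock is missing"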
05:20:35 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.169 05:20:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.169 05:20:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.169 05:20:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.169 05:20:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.169 05:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.169 [2024-12-07 05:20:35.329673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.169 [2024-12-07 05:20:35.329747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609904 ] 00:05:32.169 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.169 [2024-12-07 05:20:35.396941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.429 [2024-12-07 05:20:35.469317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.429 [2024-12-07 05:20:35.469471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.998 05:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.998 05:20:36 -- common/autotest_common.sh@862 -- # return 0 00:05:32.998 05:20:36 -- event/cpu_locks.sh@49 -- # locks_exist 1609904 00:05:32.998 05:20:36 -- event/cpu_locks.sh@22 -- # lslocks -p 1609904 00:05:32.998 05:20:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.259 lslocks: write error 00:05:33.259 05:20:36 -- event/cpu_locks.sh@50 -- # killprocess 1609904 00:05:33.259 05:20:36 -- common/autotest_common.sh@936 -- # '[' -z 1609904 ']' 00:05:33.259 05:20:36 -- common/autotest_common.sh@940 -- # kill -0 1609904 00:05:33.259 05:20:36 -- common/autotest_common.sh@941 -- # uname 00:05:33.259 05:20:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.259 05:20:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1609904 00:05:33.520 05:20:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.520 05:20:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.520 05:20:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1609904' 00:05:33.520 killing process with pid 1609904 00:05:33.520 05:20:36 -- common/autotest_common.sh@955 -- # kill 1609904 00:05:33.520 05:20:36 -- common/autotest_common.sh@960 -- # wait 1609904 00:05:33.781 05:20:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1609904 00:05:33.781 05:20:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:33.781 05:20:36 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1609904 00:05:33.781 05:20:36 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:33.781 05:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.781 05:20:36 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:33.781 05:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.781 05:20:36 -- common/autotest_common.sh@653 -- # waitforlisten 1609904 00:05:33.781 05:20:36 -- common/autotest_common.sh@829 -- # '[' -z 
1609904 ']' 00:05:33.781 05:20:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.781 05:20:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.781 05:20:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.781 05:20:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.781 05:20:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1609904) - No such process 00:05:33.781 ERROR: process (pid: 1609904) is no longer running 00:05:33.781 05:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.781 05:20:36 -- common/autotest_common.sh@862 -- # return 1 00:05:33.781 05:20:36 -- common/autotest_common.sh@653 -- # es=1 00:05:33.781 05:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.781 05:20:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.781 05:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.781 05:20:36 -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.781 05:20:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.781 05:20:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.781 05:20:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.781 00:05:33.781 real 0m1.501s 00:05:33.781 user 0m1.622s 00:05:33.781 sys 0m0.509s 00:05:33.781 05:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.781 05:20:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.781 ************************************ 00:05:33.781 END TEST default_locks 00:05:33.781 ************************************ 00:05:33.781 05:20:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.781 05:20:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.781 05:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.781 05:20:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.781 ************************************ 00:05:33.781 START TEST default_locks_via_rpc 00:05:33.781 ************************************ 00:05:33.781 05:20:36 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:33.781 05:20:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1610206 00:05:33.781 05:20:36 -- event/cpu_locks.sh@63 -- # waitforlisten 1610206 00:05:33.781 05:20:36 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.781 05:20:36 -- common/autotest_common.sh@829 -- # '[' -z 1610206 ']' 00:05:33.781 05:20:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.781 05:20:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.781 05:20:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.781 05:20:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.781 05:20:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.781 [2024-12-07 05:20:36.867563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
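default_locks_via_rpc, which begins at this point, exercises the same core lock but toggles it while the target is running, using the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs visible in the trace below. In outline (default /var/tmp/spdk.sock socket assumed; the negative check is a simplification of the suite's no_locks helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=1610206                           # the spdk_tgt pid from this run
  # Release the per-core lock at runtime, then re-acquire it and confirm
  # lslocks can see it again.
  "$rpc" framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock unexpectedly still held"
  "$rpc" framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "lock was not re-acquired"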
00:05:33.781 [2024-12-07 05:20:36.867627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610206 ] 00:05:33.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.781 [2024-12-07 05:20:36.930160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.781 [2024-12-07 05:20:36.995876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.781 [2024-12-07 05:20:36.996003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.720 05:20:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.720 05:20:37 -- common/autotest_common.sh@862 -- # return 0 00:05:34.720 05:20:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.720 05:20:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.720 05:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 05:20:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.720 05:20:37 -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.720 05:20:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.720 05:20:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.720 05:20:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.720 05:20:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.720 05:20:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.720 05:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 05:20:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.720 05:20:37 -- event/cpu_locks.sh@71 -- # locks_exist 1610206 00:05:34.720 05:20:37 -- event/cpu_locks.sh@22 -- # lslocks -p 1610206 00:05:34.720 05:20:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.720 05:20:37 -- event/cpu_locks.sh@73 -- # killprocess 1610206 00:05:34.720 05:20:37 -- common/autotest_common.sh@936 -- # '[' -z 1610206 ']' 00:05:34.720 05:20:37 -- common/autotest_common.sh@940 -- # kill -0 1610206 00:05:34.720 05:20:37 -- common/autotest_common.sh@941 -- # uname 00:05:34.720 05:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.720 05:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1610206 00:05:34.979 05:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.979 05:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.979 05:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1610206' 00:05:34.979 killing process with pid 1610206 00:05:34.979 05:20:37 -- common/autotest_common.sh@955 -- # kill 1610206 00:05:34.979 05:20:37 -- common/autotest_common.sh@960 -- # wait 1610206 00:05:34.979 00:05:34.979 real 0m1.373s 00:05:34.979 user 0m1.480s 00:05:34.979 sys 0m0.439s 00:05:34.979 05:20:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.979 05:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.979 ************************************ 00:05:34.979 END TEST default_locks_via_rpc 00:05:34.979 ************************************ 00:05:35.239 05:20:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:35.239 05:20:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.239 05:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.239 05:20:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.239 ************************************ 00:05:35.239 START TEST non_locking_app_on_locked_coremask 00:05:35.239 ************************************ 00:05:35.239 05:20:38 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:35.239 05:20:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1610424 00:05:35.239 05:20:38 -- event/cpu_locks.sh@81 -- # waitforlisten 1610424 /var/tmp/spdk.sock 00:05:35.239 05:20:38 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.239 05:20:38 -- common/autotest_common.sh@829 -- # '[' -z 1610424 ']' 00:05:35.239 05:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.239 05:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.239 05:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.239 05:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.239 05:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.239 [2024-12-07 05:20:38.283169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.239 [2024-12-07 05:20:38.283231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610424 ] 00:05:35.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.239 [2024-12-07 05:20:38.345137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.239 [2024-12-07 05:20:38.410456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.239 [2024-12-07 05:20:38.410582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.177 05:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.177 05:20:39 -- common/autotest_common.sh@862 -- # return 0 00:05:36.177 05:20:39 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:36.177 05:20:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1610729 00:05:36.177 05:20:39 -- event/cpu_locks.sh@85 -- # waitforlisten 1610729 /var/tmp/spdk2.sock 00:05:36.177 05:20:39 -- common/autotest_common.sh@829 -- # '[' -z 1610729 ']' 00:05:36.177 05:20:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.177 05:20:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.177 05:20:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.177 05:20:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.177 05:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.177 [2024-12-07 05:20:39.081014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
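The two spdk_tgt launches traced just above are the heart of non_locking_app_on_locked_coremask: both are pinned to core 0, and the second can only start because it opts out of the core lock and answers on a second RPC socket. Stripped of the waitforlisten plumbing, the arrangement is:

  bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 &                                                # holds spdk_cpu_lock for core 0
  pid1=$!
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # same core, logs "CPU core locks deactivated"
  pid2=$!
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                     # the lock stays with the first instance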
00:05:36.177 [2024-12-07 05:20:39.081064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610729 ] 00:05:36.177 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.177 [2024-12-07 05:20:39.170356] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:36.177 [2024-12-07 05:20:39.170384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.177 [2024-12-07 05:20:39.297612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.177 [2024-12-07 05:20:39.297741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.746 05:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.746 05:20:39 -- common/autotest_common.sh@862 -- # return 0 00:05:36.746 05:20:39 -- event/cpu_locks.sh@87 -- # locks_exist 1610424 00:05:36.746 05:20:39 -- event/cpu_locks.sh@22 -- # lslocks -p 1610424 00:05:36.746 05:20:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.317 lslocks: write error 00:05:37.317 05:20:40 -- event/cpu_locks.sh@89 -- # killprocess 1610424 00:05:37.317 05:20:40 -- common/autotest_common.sh@936 -- # '[' -z 1610424 ']' 00:05:37.317 05:20:40 -- common/autotest_common.sh@940 -- # kill -0 1610424 00:05:37.317 05:20:40 -- common/autotest_common.sh@941 -- # uname 00:05:37.317 05:20:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.317 05:20:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1610424 00:05:37.317 05:20:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.317 05:20:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.317 05:20:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1610424' 00:05:37.317 killing process with pid 1610424 00:05:37.317 05:20:40 -- common/autotest_common.sh@955 -- # kill 1610424 00:05:37.317 05:20:40 -- common/autotest_common.sh@960 -- # wait 1610424 00:05:37.886 05:20:40 -- event/cpu_locks.sh@90 -- # killprocess 1610729 00:05:37.886 05:20:40 -- common/autotest_common.sh@936 -- # '[' -z 1610729 ']' 00:05:37.886 05:20:40 -- common/autotest_common.sh@940 -- # kill -0 1610729 00:05:37.886 05:20:40 -- common/autotest_common.sh@941 -- # uname 00:05:37.886 05:20:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.886 05:20:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1610729 00:05:37.886 05:20:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.886 05:20:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.886 05:20:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1610729' 00:05:37.886 killing process with pid 1610729 00:05:37.886 05:20:40 -- common/autotest_common.sh@955 -- # kill 1610729 00:05:37.886 05:20:40 -- common/autotest_common.sh@960 -- # wait 1610729 00:05:38.146 00:05:38.146 real 0m2.933s 00:05:38.146 user 0m3.187s 00:05:38.146 sys 0m0.898s 00:05:38.146 05:20:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.146 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.146 ************************************ 00:05:38.146 END TEST non_locking_app_on_locked_coremask 00:05:38.146 ************************************ 00:05:38.146 05:20:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:38.146 05:20:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.146 05:20:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.146 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.146 ************************************ 00:05:38.146 START TEST locking_app_on_unlocked_coremask 00:05:38.146 ************************************ 00:05:38.146 05:20:41 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:38.146 05:20:41 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1611108 00:05:38.146 05:20:41 -- event/cpu_locks.sh@99 -- # waitforlisten 1611108 /var/tmp/spdk.sock 00:05:38.146 05:20:41 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.146 05:20:41 -- common/autotest_common.sh@829 -- # '[' -z 1611108 ']' 00:05:38.146 05:20:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.146 05:20:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.146 05:20:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.146 05:20:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.146 05:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.146 [2024-12-07 05:20:41.261932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.146 [2024-12-07 05:20:41.261986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611108 ] 00:05:38.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.146 [2024-12-07 05:20:41.322575] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.146 [2024-12-07 05:20:41.322610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.146 [2024-12-07 05:20:41.384023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.146 [2024-12-07 05:20:41.384166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.089 05:20:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.089 05:20:42 -- common/autotest_common.sh@862 -- # return 0 00:05:39.089 05:20:42 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1611374 00:05:39.089 05:20:42 -- event/cpu_locks.sh@103 -- # waitforlisten 1611374 /var/tmp/spdk2.sock 00:05:39.089 05:20:42 -- common/autotest_common.sh@829 -- # '[' -z 1611374 ']' 00:05:39.089 05:20:42 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.089 05:20:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.089 05:20:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.089 05:20:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:39.090 05:20:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.090 05:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.090 [2024-12-07 05:20:42.095991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.090 [2024-12-07 05:20:42.096049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611374 ] 00:05:39.090 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.090 [2024-12-07 05:20:42.190680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.090 [2024-12-07 05:20:42.313550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.090 [2024-12-07 05:20:42.313687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.660 05:20:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.660 05:20:42 -- common/autotest_common.sh@862 -- # return 0 00:05:39.660 05:20:42 -- event/cpu_locks.sh@105 -- # locks_exist 1611374 00:05:39.660 05:20:42 -- event/cpu_locks.sh@22 -- # lslocks -p 1611374 00:05:39.660 05:20:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.230 lslocks: write error 00:05:40.230 05:20:43 -- event/cpu_locks.sh@107 -- # killprocess 1611108 00:05:40.230 05:20:43 -- common/autotest_common.sh@936 -- # '[' -z 1611108 ']' 00:05:40.230 05:20:43 -- common/autotest_common.sh@940 -- # kill -0 1611108 00:05:40.230 05:20:43 -- common/autotest_common.sh@941 -- # uname 00:05:40.230 05:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.230 05:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611108 00:05:40.230 05:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.230 05:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.230 05:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611108' 00:05:40.230 killing process with pid 1611108 00:05:40.230 05:20:43 -- common/autotest_common.sh@955 -- # kill 1611108 00:05:40.230 05:20:43 -- common/autotest_common.sh@960 -- # wait 1611108 00:05:40.801 05:20:43 -- event/cpu_locks.sh@108 -- # killprocess 1611374 00:05:40.801 05:20:43 -- common/autotest_common.sh@936 -- # '[' -z 1611374 ']' 00:05:40.801 05:20:43 -- common/autotest_common.sh@940 -- # kill -0 1611374 00:05:40.801 05:20:43 -- common/autotest_common.sh@941 -- # uname 00:05:40.801 05:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.801 05:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611374 00:05:40.801 05:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.801 05:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.801 05:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611374' 00:05:40.801 killing process with pid 1611374 00:05:40.801 05:20:43 -- common/autotest_common.sh@955 -- # kill 1611374 00:05:40.801 05:20:43 -- common/autotest_common.sh@960 -- # wait 1611374 00:05:41.061 00:05:41.061 real 0m2.951s 00:05:41.061 user 0m3.250s 00:05:41.061 sys 0m0.892s 00:05:41.061 05:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.061 05:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.061 ************************************ 00:05:41.061 END TEST locking_app_on_unlocked_coremask 
00:05:41.061 ************************************ 00:05:41.061 05:20:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.061 05:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.061 05:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.061 05:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.061 ************************************ 00:05:41.061 START TEST locking_app_on_locked_coremask 00:05:41.061 ************************************ 00:05:41.061 05:20:44 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:41.061 05:20:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1611817 00:05:41.061 05:20:44 -- event/cpu_locks.sh@116 -- # waitforlisten 1611817 /var/tmp/spdk.sock 00:05:41.061 05:20:44 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.061 05:20:44 -- common/autotest_common.sh@829 -- # '[' -z 1611817 ']' 00:05:41.061 05:20:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.061 05:20:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.061 05:20:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.061 05:20:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.061 05:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.061 [2024-12-07 05:20:44.259551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.061 [2024-12-07 05:20:44.259604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611817 ] 00:05:41.061 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.321 [2024-12-07 05:20:44.320931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.321 [2024-12-07 05:20:44.381525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.321 [2024-12-07 05:20:44.381674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.891 05:20:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.891 05:20:45 -- common/autotest_common.sh@862 -- # return 0 00:05:41.891 05:20:45 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.891 05:20:45 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1611917 00:05:41.891 05:20:45 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1611917 /var/tmp/spdk2.sock 00:05:41.891 05:20:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:41.891 05:20:45 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1611917 /var/tmp/spdk2.sock 00:05:41.891 05:20:45 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.891 05:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.891 05:20:45 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.891 05:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.891 05:20:45 -- common/autotest_common.sh@653 -- # waitforlisten 1611917 /var/tmp/spdk2.sock 00:05:41.891 05:20:45 -- common/autotest_common.sh@829 -- 
# '[' -z 1611917 ']' 00:05:41.891 05:20:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.891 05:20:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.891 05:20:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.891 05:20:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.891 05:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 [2024-12-07 05:20:45.084089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.891 [2024-12-07 05:20:45.084141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611917 ] 00:05:41.891 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.151 [2024-12-07 05:20:45.179626] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1611817 has claimed it. 00:05:42.151 [2024-12-07 05:20:45.179671] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1611917) - No such process 00:05:42.719 ERROR: process (pid: 1611917) is no longer running 00:05:42.719 05:20:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.719 05:20:45 -- common/autotest_common.sh@862 -- # return 1 00:05:42.719 05:20:45 -- common/autotest_common.sh@653 -- # es=1 00:05:42.719 05:20:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.719 05:20:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.719 05:20:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.719 05:20:45 -- event/cpu_locks.sh@122 -- # locks_exist 1611817 00:05:42.719 05:20:45 -- event/cpu_locks.sh@22 -- # lslocks -p 1611817 00:05:42.719 05:20:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.285 lslocks: write error 00:05:43.285 05:20:46 -- event/cpu_locks.sh@124 -- # killprocess 1611817 00:05:43.285 05:20:46 -- common/autotest_common.sh@936 -- # '[' -z 1611817 ']' 00:05:43.285 05:20:46 -- common/autotest_common.sh@940 -- # kill -0 1611817 00:05:43.285 05:20:46 -- common/autotest_common.sh@941 -- # uname 00:05:43.285 05:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.285 05:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611817 00:05:43.285 05:20:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.285 05:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.285 05:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611817' 00:05:43.285 killing process with pid 1611817 00:05:43.285 05:20:46 -- common/autotest_common.sh@955 -- # kill 1611817 00:05:43.285 05:20:46 -- common/autotest_common.sh@960 -- # wait 1611817 00:05:43.285 00:05:43.285 real 0m2.300s 00:05:43.285 user 0m2.557s 00:05:43.285 sys 0m0.642s 00:05:43.285 05:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.285 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.285 ************************************ 00:05:43.285 END TEST locking_app_on_locked_coremask 00:05:43.285 ************************************ 00:05:43.553 
05:20:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.553 05:20:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.553 05:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.553 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.553 ************************************ 00:05:43.553 START TEST locking_overlapped_coremask 00:05:43.553 ************************************ 00:05:43.553 05:20:46 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:43.553 05:20:46 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1612217 00:05:43.553 05:20:46 -- event/cpu_locks.sh@133 -- # waitforlisten 1612217 /var/tmp/spdk.sock 00:05:43.553 05:20:46 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:43.553 05:20:46 -- common/autotest_common.sh@829 -- # '[' -z 1612217 ']' 00:05:43.553 05:20:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.553 05:20:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.553 05:20:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.553 05:20:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.553 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.553 [2024-12-07 05:20:46.615697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.553 [2024-12-07 05:20:46.615763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612217 ] 00:05:43.553 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.553 [2024-12-07 05:20:46.679075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.553 [2024-12-07 05:20:46.746929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.553 [2024-12-07 05:20:46.747114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.553 [2024-12-07 05:20:46.747231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.554 [2024-12-07 05:20:46.747232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.197 05:20:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.197 05:20:47 -- common/autotest_common.sh@862 -- # return 0 00:05:44.197 05:20:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1612537 00:05:44.197 05:20:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1612537 /var/tmp/spdk2.sock 00:05:44.197 05:20:47 -- common/autotest_common.sh@650 -- # local es=0 00:05:44.197 05:20:47 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:44.197 05:20:47 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1612537 /var/tmp/spdk2.sock 00:05:44.197 05:20:47 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.197 05:20:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.197 05:20:47 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.197 05:20:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.197 05:20:47 
-- common/autotest_common.sh@653 -- # waitforlisten 1612537 /var/tmp/spdk2.sock 00:05:44.197 05:20:47 -- common/autotest_common.sh@829 -- # '[' -z 1612537 ']' 00:05:44.197 05:20:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.197 05:20:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.197 05:20:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.197 05:20:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.197 05:20:47 -- common/autotest_common.sh@10 -- # set +x 00:05:44.458 [2024-12-07 05:20:47.442151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.458 [2024-12-07 05:20:47.442202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612537 ] 00:05:44.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.458 [2024-12-07 05:20:47.516072] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1612217 has claimed it. 00:05:44.458 [2024-12-07 05:20:47.516104] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1612537) - No such process 00:05:45.030 ERROR: process (pid: 1612537) is no longer running 00:05:45.030 05:20:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.030 05:20:48 -- common/autotest_common.sh@862 -- # return 1 00:05:45.030 05:20:48 -- common/autotest_common.sh@653 -- # es=1 00:05:45.030 05:20:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.030 05:20:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.030 05:20:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.030 05:20:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.030 05:20:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.030 05:20:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.030 05:20:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.030 05:20:48 -- event/cpu_locks.sh@141 -- # killprocess 1612217 00:05:45.030 05:20:48 -- common/autotest_common.sh@936 -- # '[' -z 1612217 ']' 00:05:45.030 05:20:48 -- common/autotest_common.sh@940 -- # kill -0 1612217 00:05:45.030 05:20:48 -- common/autotest_common.sh@941 -- # uname 00:05:45.030 05:20:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.030 05:20:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1612217 00:05:45.030 05:20:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.030 05:20:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.030 05:20:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1612217' 00:05:45.030 killing process with pid 1612217 00:05:45.030 05:20:48 -- common/autotest_common.sh@955 -- # kill 1612217 00:05:45.030 05:20:48 
-- common/autotest_common.sh@960 -- # wait 1612217 00:05:45.291 00:05:45.291 real 0m1.778s 00:05:45.291 user 0m5.053s 00:05:45.291 sys 0m0.362s 00:05:45.291 05:20:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.291 05:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.291 ************************************ 00:05:45.291 END TEST locking_overlapped_coremask 00:05:45.291 ************************************ 00:05:45.291 05:20:48 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.291 05:20:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.291 05:20:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.291 05:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.291 ************************************ 00:05:45.291 START TEST locking_overlapped_coremask_via_rpc 00:05:45.291 ************************************ 00:05:45.291 05:20:48 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:45.291 05:20:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1612649 00:05:45.291 05:20:48 -- event/cpu_locks.sh@149 -- # waitforlisten 1612649 /var/tmp/spdk.sock 00:05:45.291 05:20:48 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.291 05:20:48 -- common/autotest_common.sh@829 -- # '[' -z 1612649 ']' 00:05:45.291 05:20:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.291 05:20:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.291 05:20:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.291 05:20:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.291 05:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.291 [2024-12-07 05:20:48.432412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.291 [2024-12-07 05:20:48.432478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612649 ] 00:05:45.291 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.291 [2024-12-07 05:20:48.496717] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
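Both overlapped-coremask cases around this point rest on the same mask arithmetic: the first target runs with -m 0x7 (cores 0, 1 and 2) and the second with -m 0x1c (cores 2, 3 and 4), so the two cpumasks collide exactly on core 2, the core named in the lock errors in this run. A quick, purely illustrative shell check of the overlap:

  # 0x7  = 0b00111 -> cores 0,1,2
  # 0x1c = 0b11100 -> cores 2,3,4
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2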
00:05:45.291 [2024-12-07 05:20:48.496754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.553 [2024-12-07 05:20:48.566496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.553 [2024-12-07 05:20:48.566763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.553 [2024-12-07 05:20:48.566878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.553 [2024-12-07 05:20:48.566880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.124 05:20:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.124 05:20:49 -- common/autotest_common.sh@862 -- # return 0 00:05:46.124 05:20:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1612918 00:05:46.124 05:20:49 -- event/cpu_locks.sh@153 -- # waitforlisten 1612918 /var/tmp/spdk2.sock 00:05:46.124 05:20:49 -- common/autotest_common.sh@829 -- # '[' -z 1612918 ']' 00:05:46.124 05:20:49 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.124 05:20:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.124 05:20:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.124 05:20:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.124 05:20:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.124 05:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.124 [2024-12-07 05:20:49.265056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.124 [2024-12-07 05:20:49.265107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612918 ] 00:05:46.124 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.124 [2024-12-07 05:20:49.336534] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.124 [2024-12-07 05:20:49.336556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.385 [2024-12-07 05:20:49.440379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.385 [2024-12-07 05:20:49.440612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.385 [2024-12-07 05:20:49.444130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.385 [2024-12-07 05:20:49.444133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.956 05:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.956 05:20:50 -- common/autotest_common.sh@862 -- # return 0 00:05:46.956 05:20:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.956 05:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.956 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.956 05:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.956 05:20:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.956 05:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:05:46.956 05:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.956 05:20:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:46.956 05:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.956 05:20:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:46.956 05:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.956 05:20:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.956 05:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.956 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.956 [2024-12-07 05:20:50.056072] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1612649 has claimed it. 00:05:46.956 request: 00:05:46.956 { 00:05:46.956 "method": "framework_enable_cpumask_locks", 00:05:46.956 "req_id": 1 00:05:46.956 } 00:05:46.956 Got JSON-RPC error response 00:05:46.956 response: 00:05:46.956 { 00:05:46.956 "code": -32603, 00:05:46.956 "message": "Failed to claim CPU core: 2" 00:05:46.957 } 00:05:46.957 05:20:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.957 05:20:50 -- common/autotest_common.sh@653 -- # es=1 00:05:46.957 05:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.957 05:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.957 05:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.957 05:20:50 -- event/cpu_locks.sh@158 -- # waitforlisten 1612649 /var/tmp/spdk.sock 00:05:46.957 05:20:50 -- common/autotest_common.sh@829 -- # '[' -z 1612649 ']' 00:05:46.957 05:20:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.957 05:20:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.957 05:20:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
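The JSON-RPC exchange above is how the via_rpc variant turns locking on after startup instead of at launch: both targets start with --disable-cpumask-locks, the first then enables locks over its RPC socket and claims its cores, and when the second tries the same call it gets error -32603 because core 2 is already held. Outside the test harness the same calls can be issued with scripts/rpc.py; a sketch, assuming the default socket for the first target and /var/tmp/spdk2.sock for the second as in this run:

  # first target (default /var/tmp/spdk.sock): succeeds and claims its cores
  scripts/rpc.py framework_enable_cpumask_locks
  # second target: fails with -32603 "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks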
00:05:46.957 05:20:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.957 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:47.218 05:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.218 05:20:50 -- common/autotest_common.sh@862 -- # return 0 00:05:47.218 05:20:50 -- event/cpu_locks.sh@159 -- # waitforlisten 1612918 /var/tmp/spdk2.sock 00:05:47.218 05:20:50 -- common/autotest_common.sh@829 -- # '[' -z 1612918 ']' 00:05:47.218 05:20:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.218 05:20:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.218 05:20:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.218 05:20:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.218 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:47.218 05:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.218 05:20:50 -- common/autotest_common.sh@862 -- # return 0 00:05:47.218 05:20:50 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.218 05:20:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.218 05:20:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.218 05:20:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.218 00:05:47.218 real 0m2.036s 00:05:47.218 user 0m0.811s 00:05:47.218 sys 0m0.145s 00:05:47.218 05:20:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.218 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:47.218 ************************************ 00:05:47.218 END TEST locking_overlapped_coremask_via_rpc 00:05:47.218 ************************************ 00:05:47.218 05:20:50 -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.218 05:20:50 -- event/cpu_locks.sh@15 -- # [[ -z 1612649 ]] 00:05:47.218 05:20:50 -- event/cpu_locks.sh@15 -- # killprocess 1612649 00:05:47.218 05:20:50 -- common/autotest_common.sh@936 -- # '[' -z 1612649 ']' 00:05:47.218 05:20:50 -- common/autotest_common.sh@940 -- # kill -0 1612649 00:05:47.218 05:20:50 -- common/autotest_common.sh@941 -- # uname 00:05:47.480 05:20:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.480 05:20:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1612649 00:05:47.480 05:20:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.480 05:20:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.480 05:20:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1612649' 00:05:47.480 killing process with pid 1612649 00:05:47.480 05:20:50 -- common/autotest_common.sh@955 -- # kill 1612649 00:05:47.480 05:20:50 -- common/autotest_common.sh@960 -- # wait 1612649 00:05:47.741 05:20:50 -- event/cpu_locks.sh@16 -- # [[ -z 1612918 ]] 00:05:47.741 05:20:50 -- event/cpu_locks.sh@16 -- # killprocess 1612918 00:05:47.741 05:20:50 -- common/autotest_common.sh@936 -- # '[' -z 1612918 ']' 00:05:47.741 05:20:50 -- common/autotest_common.sh@940 -- # kill -0 1612918 00:05:47.741 05:20:50 -- common/autotest_common.sh@941 -- # uname 
00:05:47.741 05:20:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.741 05:20:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1612918 00:05:47.741 05:20:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:47.741 05:20:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:47.741 05:20:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1612918' 00:05:47.741 killing process with pid 1612918 00:05:47.741 05:20:50 -- common/autotest_common.sh@955 -- # kill 1612918 00:05:47.741 05:20:50 -- common/autotest_common.sh@960 -- # wait 1612918 00:05:48.002 05:20:50 -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.002 05:20:50 -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.002 05:20:50 -- event/cpu_locks.sh@15 -- # [[ -z 1612649 ]] 00:05:48.002 05:20:50 -- event/cpu_locks.sh@15 -- # killprocess 1612649 00:05:48.002 05:20:50 -- common/autotest_common.sh@936 -- # '[' -z 1612649 ']' 00:05:48.002 05:20:50 -- common/autotest_common.sh@940 -- # kill -0 1612649 00:05:48.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1612649) - No such process 00:05:48.003 05:20:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1612649 is not found' 00:05:48.003 Process with pid 1612649 is not found 00:05:48.003 05:20:50 -- event/cpu_locks.sh@16 -- # [[ -z 1612918 ]] 00:05:48.003 05:20:50 -- event/cpu_locks.sh@16 -- # killprocess 1612918 00:05:48.003 05:20:50 -- common/autotest_common.sh@936 -- # '[' -z 1612918 ']' 00:05:48.003 05:20:50 -- common/autotest_common.sh@940 -- # kill -0 1612918 00:05:48.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1612918) - No such process 00:05:48.003 05:20:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1612918 is not found' 00:05:48.003 Process with pid 1612918 is not found 00:05:48.003 05:20:50 -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.003 00:05:48.003 real 0m15.935s 00:05:48.003 user 0m27.756s 00:05:48.003 sys 0m4.710s 00:05:48.003 05:20:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.003 05:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.003 ************************************ 00:05:48.003 END TEST cpu_locks 00:05:48.003 ************************************ 00:05:48.003 00:05:48.003 real 0m42.256s 00:05:48.003 user 1m23.695s 00:05:48.003 sys 0m7.844s 00:05:48.003 05:20:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.003 05:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:48.003 ************************************ 00:05:48.003 END TEST event 00:05:48.003 ************************************ 00:05:48.003 05:20:51 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.003 05:20:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.003 05:20:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.003 05:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:48.003 ************************************ 00:05:48.003 START TEST thread 00:05:48.003 ************************************ 00:05:48.003 05:20:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.003 * Looking for test storage... 
00:05:48.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:48.003 05:20:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:48.003 05:20:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:48.003 05:20:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:48.264 05:20:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:48.264 05:20:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:48.264 05:20:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:48.264 05:20:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:48.264 05:20:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:48.264 05:20:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:48.264 05:20:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.264 05:20:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:48.264 05:20:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:48.264 05:20:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:48.264 05:20:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:48.264 05:20:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:48.264 05:20:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:48.264 05:20:51 -- scripts/common.sh@344 -- # : 1 00:05:48.264 05:20:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:48.264 05:20:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.264 05:20:51 -- scripts/common.sh@364 -- # decimal 1 00:05:48.264 05:20:51 -- scripts/common.sh@352 -- # local d=1 00:05:48.264 05:20:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.264 05:20:51 -- scripts/common.sh@354 -- # echo 1 00:05:48.264 05:20:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:48.264 05:20:51 -- scripts/common.sh@365 -- # decimal 2 00:05:48.264 05:20:51 -- scripts/common.sh@352 -- # local d=2 00:05:48.264 05:20:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.264 05:20:51 -- scripts/common.sh@354 -- # echo 2 00:05:48.264 05:20:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:48.264 05:20:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:48.264 05:20:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:48.264 05:20:51 -- scripts/common.sh@367 -- # return 0 00:05:48.264 05:20:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.265 05:20:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:48.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.265 --rc genhtml_branch_coverage=1 00:05:48.265 --rc genhtml_function_coverage=1 00:05:48.265 --rc genhtml_legend=1 00:05:48.265 --rc geninfo_all_blocks=1 00:05:48.265 --rc geninfo_unexecuted_blocks=1 00:05:48.265 00:05:48.265 ' 00:05:48.265 05:20:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:48.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.265 --rc genhtml_branch_coverage=1 00:05:48.265 --rc genhtml_function_coverage=1 00:05:48.265 --rc genhtml_legend=1 00:05:48.265 --rc geninfo_all_blocks=1 00:05:48.265 --rc geninfo_unexecuted_blocks=1 00:05:48.265 00:05:48.265 ' 00:05:48.265 05:20:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:48.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.265 --rc genhtml_branch_coverage=1 00:05:48.265 --rc genhtml_function_coverage=1 00:05:48.265 --rc genhtml_legend=1 00:05:48.265 --rc geninfo_all_blocks=1 00:05:48.265 --rc geninfo_unexecuted_blocks=1 00:05:48.265 00:05:48.265 ' 
00:05:48.265 05:20:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:48.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.265 --rc genhtml_branch_coverage=1 00:05:48.265 --rc genhtml_function_coverage=1 00:05:48.265 --rc genhtml_legend=1 00:05:48.265 --rc geninfo_all_blocks=1 00:05:48.265 --rc geninfo_unexecuted_blocks=1 00:05:48.265 00:05:48.265 ' 00:05:48.265 05:20:51 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.265 05:20:51 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:48.265 05:20:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.265 05:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:48.265 ************************************ 00:05:48.265 START TEST thread_poller_perf 00:05:48.265 ************************************ 00:05:48.265 05:20:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.265 [2024-12-07 05:20:51.300236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.265 [2024-12-07 05:20:51.300352] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613362 ] 00:05:48.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.265 [2024-12-07 05:20:51.369582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.265 [2024-12-07 05:20:51.440919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.265 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:49.649 [2024-12-07T04:20:52.889Z] ====================================== 00:05:49.649 [2024-12-07T04:20:52.889Z] busy:2415317294 (cyc) 00:05:49.649 [2024-12-07T04:20:52.889Z] total_run_count: 276000 00:05:49.649 [2024-12-07T04:20:52.889Z] tsc_hz: 2400000000 (cyc) 00:05:49.649 [2024-12-07T04:20:52.889Z] ====================================== 00:05:49.649 [2024-12-07T04:20:52.889Z] poller_cost: 8751 (cyc), 3646 (nsec) 00:05:49.649 00:05:49.649 real 0m1.227s 00:05:49.649 user 0m1.144s 00:05:49.649 sys 0m0.079s 00:05:49.649 05:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.649 05:20:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.649 ************************************ 00:05:49.649 END TEST thread_poller_perf 00:05:49.649 ************************************ 00:05:49.649 05:20:52 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.649 05:20:52 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:49.649 05:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.649 05:20:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.649 ************************************ 00:05:49.649 START TEST thread_poller_perf 00:05:49.649 ************************************ 00:05:49.649 05:20:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.649 [2024-12-07 05:20:52.558127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
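The poller_perf summary above is internally consistent: poller_cost is simply the busy cycle count divided by the number of poller invocations, converted to nanoseconds with the reported TSC rate. Checking the 1-microsecond-period run by hand with integer shell arithmetic, using the values from the table above:

  echo $(( 2415317294 / 276000 ))                              # ~8751 cycles per poll
  echo $(( 2415317294 / 276000 * 1000000000 / 2400000000 ))    # ~3646 ns at 2.4 GHz

The same arithmetic reproduces the 630-cycle / 262 ns figure reported by the zero-period run that follows.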
00:05:49.649 [2024-12-07 05:20:52.558184] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613712 ] 00:05:49.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.649 [2024-12-07 05:20:52.617017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.649 [2024-12-07 05:20:52.679128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.649 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:50.590 [2024-12-07T04:20:53.830Z] ====================================== 00:05:50.590 [2024-12-07T04:20:53.830Z] busy:2402665132 (cyc) 00:05:50.590 [2024-12-07T04:20:53.830Z] total_run_count: 3809000 00:05:50.590 [2024-12-07T04:20:53.830Z] tsc_hz: 2400000000 (cyc) 00:05:50.590 [2024-12-07T04:20:53.830Z] ====================================== 00:05:50.590 [2024-12-07T04:20:53.830Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:50.590 00:05:50.590 real 0m1.183s 00:05:50.590 user 0m1.118s 00:05:50.590 sys 0m0.061s 00:05:50.590 05:20:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.590 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:05:50.590 ************************************ 00:05:50.590 END TEST thread_poller_perf 00:05:50.590 ************************************ 00:05:50.590 05:20:53 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:50.590 00:05:50.590 real 0m2.687s 00:05:50.590 user 0m2.396s 00:05:50.590 sys 0m0.307s 00:05:50.590 05:20:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.590 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:05:50.590 ************************************ 00:05:50.590 END TEST thread 00:05:50.590 ************************************ 00:05:50.590 05:20:53 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:50.590 05:20:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.591 05:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.591 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:05:50.591 ************************************ 00:05:50.591 START TEST accel 00:05:50.591 ************************************ 00:05:50.591 05:20:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:50.852 * Looking for test storage... 
00:05:50.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:50.852 05:20:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.852 05:20:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.852 05:20:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.852 05:20:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.852 05:20:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.852 05:20:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.852 05:20:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.852 05:20:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.852 05:20:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.852 05:20:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.852 05:20:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.852 05:20:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.852 05:20:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.852 05:20:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.852 05:20:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.852 05:20:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.852 05:20:53 -- scripts/common.sh@344 -- # : 1 00:05:50.852 05:20:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.852 05:20:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.852 05:20:53 -- scripts/common.sh@364 -- # decimal 1 00:05:50.852 05:20:53 -- scripts/common.sh@352 -- # local d=1 00:05:50.852 05:20:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.852 05:20:53 -- scripts/common.sh@354 -- # echo 1 00:05:50.852 05:20:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.852 05:20:53 -- scripts/common.sh@365 -- # decimal 2 00:05:50.852 05:20:53 -- scripts/common.sh@352 -- # local d=2 00:05:50.852 05:20:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.852 05:20:53 -- scripts/common.sh@354 -- # echo 2 00:05:50.852 05:20:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.852 05:20:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.852 05:20:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.852 05:20:54 -- scripts/common.sh@367 -- # return 0 00:05:50.852 05:20:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.852 05:20:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.852 --rc genhtml_branch_coverage=1 00:05:50.852 --rc genhtml_function_coverage=1 00:05:50.852 --rc genhtml_legend=1 00:05:50.852 --rc geninfo_all_blocks=1 00:05:50.852 --rc geninfo_unexecuted_blocks=1 00:05:50.852 00:05:50.852 ' 00:05:50.852 05:20:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.852 --rc genhtml_branch_coverage=1 00:05:50.852 --rc genhtml_function_coverage=1 00:05:50.852 --rc genhtml_legend=1 00:05:50.852 --rc geninfo_all_blocks=1 00:05:50.852 --rc geninfo_unexecuted_blocks=1 00:05:50.852 00:05:50.852 ' 00:05:50.852 05:20:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.852 --rc genhtml_branch_coverage=1 00:05:50.852 --rc genhtml_function_coverage=1 00:05:50.852 --rc genhtml_legend=1 00:05:50.852 --rc geninfo_all_blocks=1 00:05:50.852 --rc geninfo_unexecuted_blocks=1 00:05:50.852 00:05:50.852 ' 
00:05:50.852 05:20:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.852 --rc genhtml_branch_coverage=1 00:05:50.852 --rc genhtml_function_coverage=1 00:05:50.852 --rc genhtml_legend=1 00:05:50.852 --rc geninfo_all_blocks=1 00:05:50.852 --rc geninfo_unexecuted_blocks=1 00:05:50.852 00:05:50.852 ' 00:05:50.852 05:20:54 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:50.852 05:20:54 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:50.852 05:20:54 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.852 05:20:54 -- accel/accel.sh@59 -- # spdk_tgt_pid=1614119 00:05:50.852 05:20:54 -- accel/accel.sh@60 -- # waitforlisten 1614119 00:05:50.852 05:20:54 -- common/autotest_common.sh@829 -- # '[' -z 1614119 ']' 00:05:50.852 05:20:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.852 05:20:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.852 05:20:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.852 05:20:54 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:50.852 05:20:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.852 05:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.852 05:20:54 -- accel/accel.sh@58 -- # build_accel_config 00:05:50.852 05:20:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.852 05:20:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.852 05:20:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.852 05:20:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.852 05:20:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.852 05:20:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.852 05:20:54 -- accel/accel.sh@42 -- # jq -r . 00:05:50.852 [2024-12-07 05:20:54.067736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.852 [2024-12-07 05:20:54.067812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614119 ] 00:05:51.113 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.113 [2024-12-07 05:20:54.133249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.113 [2024-12-07 05:20:54.205378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.113 [2024-12-07 05:20:54.205517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.684 05:20:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.684 05:20:54 -- common/autotest_common.sh@862 -- # return 0 00:05:51.684 05:20:54 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:51.684 05:20:54 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:51.684 05:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.684 05:20:54 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:51.684 05:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.684 05:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # IFS== 00:05:51.684 05:20:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.684 05:20:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.684 05:20:54 -- accel/accel.sh@67 -- # killprocess 1614119 00:05:51.684 05:20:54 -- common/autotest_common.sh@936 -- # '[' -z 1614119 ']' 00:05:51.684 05:20:54 -- common/autotest_common.sh@940 -- # kill -0 1614119 00:05:51.684 05:20:54 -- common/autotest_common.sh@941 -- # uname 00:05:51.684 05:20:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.684 05:20:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1614119 00:05:51.944 05:20:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.944 05:20:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.944 05:20:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1614119' 00:05:51.944 killing process with pid 1614119 00:05:51.944 05:20:54 -- common/autotest_common.sh@955 -- # kill 1614119 00:05:51.944 05:20:54 -- common/autotest_common.sh@960 -- # wait 1614119 00:05:51.944 05:20:55 -- accel/accel.sh@68 -- # trap - ERR 00:05:51.944 05:20:55 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:51.944 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:51.944 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.944 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:51.944 05:20:55 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:51.944 05:20:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:51.944 05:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.944 05:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.944 05:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.944 05:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.945 05:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.945 05:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.945 05:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.945 05:20:55 -- accel/accel.sh@42 -- # jq -r . 
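The long IFS== loop above is just walking the opcode-to-module map returned by the accel_get_opc_assignments RPC; on this target every opcode resolves to the software module because no hardware accel drivers are configured into the build. The same map can be dumped directly with the jq filter the test uses; a sketch against a running target on the default RPC socket:

  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one "opcode=module" pair per line, e.g. copy=software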
00:05:52.204 05:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.204 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.204 05:20:55 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:52.204 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.204 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.204 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.204 ************************************ 00:05:52.204 START TEST accel_missing_filename 00:05:52.204 ************************************ 00:05:52.204 05:20:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:52.204 05:20:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.204 05:20:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:52.204 05:20:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:52.204 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.204 05:20:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:52.204 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.204 05:20:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:52.204 05:20:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.204 05:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.204 05:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.204 05:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.204 05:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.204 05:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.204 05:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.204 05:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.204 05:20:55 -- accel/accel.sh@42 -- # jq -r . 00:05:52.204 [2024-12-07 05:20:55.273246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.204 [2024-12-07 05:20:55.273351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614293 ] 00:05:52.204 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.204 [2024-12-07 05:20:55.340331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.204 [2024-12-07 05:20:55.408700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.204 [2024-12-07 05:20:55.440717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.464 [2024-12-07 05:20:55.477739] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:52.464 A filename is required. 
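That failure is the point of the test: compress refuses to start without an uncompressed input file, and the NOT wrapper turns the refusal into a pass. A hedged sketch of the failing and the likely passing invocation, reusing only paths and flags that appear in this log:

# Negative case reproduced from the trace above: compress with no input file.
accel_perf -t 1 -w compress          # fails: "A filename is required."
# Likely positive form (compare the accel_compress_verify run that follows):
# point -l at an uncompressed input file.
accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib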
00:05:52.464 05:20:55 -- common/autotest_common.sh@653 -- # es=234 00:05:52.464 05:20:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.464 05:20:55 -- common/autotest_common.sh@662 -- # es=106 00:05:52.464 05:20:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:52.464 05:20:55 -- common/autotest_common.sh@670 -- # es=1 00:05:52.464 05:20:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.464 00:05:52.464 real 0m0.289s 00:05:52.464 user 0m0.222s 00:05:52.464 sys 0m0.110s 00:05:52.464 05:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.464 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.464 ************************************ 00:05:52.464 END TEST accel_missing_filename 00:05:52.464 ************************************ 00:05:52.464 05:20:55 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.464 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:52.464 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.464 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.464 ************************************ 00:05:52.464 START TEST accel_compress_verify 00:05:52.464 ************************************ 00:05:52.464 05:20:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.464 05:20:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.464 05:20:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.464 05:20:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:52.464 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.464 05:20:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:52.464 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.464 05:20:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.464 05:20:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.464 05:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.464 05:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.464 05:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.464 05:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.464 05:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.464 05:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.464 05:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.464 05:20:55 -- accel/accel.sh@42 -- # jq -r . 00:05:52.464 [2024-12-07 05:20:55.604827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:52.464 [2024-12-07 05:20:55.604907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614480 ] 00:05:52.464 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.464 [2024-12-07 05:20:55.667925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.724 [2024-12-07 05:20:55.732286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.724 [2024-12-07 05:20:55.763995] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.724 [2024-12-07 05:20:55.800778] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:52.724 00:05:52.724 Compression does not support the verify option, aborting. 00:05:52.724 05:20:55 -- common/autotest_common.sh@653 -- # es=161 00:05:52.724 05:20:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.724 05:20:55 -- common/autotest_common.sh@662 -- # es=33 00:05:52.724 05:20:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:52.724 05:20:55 -- common/autotest_common.sh@670 -- # es=1 00:05:52.724 05:20:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.724 00:05:52.724 real 0m0.279s 00:05:52.724 user 0m0.216s 00:05:52.724 sys 0m0.104s 00:05:52.724 05:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.724 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.724 ************************************ 00:05:52.724 END TEST accel_compress_verify 00:05:52.724 ************************************ 00:05:52.724 05:20:55 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:52.724 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.724 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.724 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.724 ************************************ 00:05:52.724 START TEST accel_wrong_workload 00:05:52.724 ************************************ 00:05:52.724 05:20:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:52.724 05:20:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.724 05:20:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:52.724 05:20:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:52.724 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.724 05:20:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:52.724 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.724 05:20:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:52.724 05:20:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:52.724 05:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.724 05:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.724 05:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.724 05:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.724 05:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.724 05:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.724 05:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.724 05:20:55 -- accel/accel.sh@42 -- # jq -r . 
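The es= bookkeeping in the last two tests (es=234 then 106, es=161 then 33, both ending at es=1) is the NOT helper normalizing the child's exit status so that an expected failure counts as success for run_test. A simplified sketch of that idea, not the exact autotest_common.sh implementation:

# Simplified: succeed only when the wrapped command fails.
NOT() {
        local es=0
        "$@" || es=$?
        # Exit codes above 128 usually mean "killed by signal"; fold them down
        # the way the trace does (234 -> 106, 161 -> 33) before judging.
        (( es > 128 )) && es=$((es - 128))
        # Returning success here means the wrapped command failed, which is
        # exactly what the negative tests in this suite expect.
        (( es != 0 ))
}
NOT accel_perf -t 1 -w foobar && echo 'foobar rejected as expected'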
00:05:52.724 Unsupported workload type: foobar 00:05:52.724 [2024-12-07 05:20:55.924675] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:52.724 accel_perf options: 00:05:52.724 [-h help message] 00:05:52.724 [-q queue depth per core] 00:05:52.724 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:52.724 [-T number of threads per core 00:05:52.724 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:52.724 [-t time in seconds] 00:05:52.724 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:52.724 [ dif_verify, , dif_generate, dif_generate_copy 00:05:52.724 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:52.724 [-l for compress/decompress workloads, name of uncompressed input file 00:05:52.724 [-S for crc32c workload, use this seed value (default 0) 00:05:52.724 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:52.724 [-f for fill workload, use this BYTE value (default 255) 00:05:52.724 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:52.724 [-y verify result if this switch is on] 00:05:52.724 [-a tasks to allocate per core (default: same value as -q)] 00:05:52.724 Can be used to spread operations across a wider range of memory. 00:05:52.724 05:20:55 -- common/autotest_common.sh@653 -- # es=1 00:05:52.724 05:20:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.725 05:20:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.725 05:20:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.725 00:05:52.725 real 0m0.034s 00:05:52.725 user 0m0.018s 00:05:52.725 sys 0m0.016s 00:05:52.725 05:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.725 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.725 ************************************ 00:05:52.725 END TEST accel_wrong_workload 00:05:52.725 ************************************ 00:05:52.725 Error: writing output failed: Broken pipe 00:05:52.984 05:20:55 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:52.984 05:20:55 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:52.984 05:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.984 05:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.984 ************************************ 00:05:52.984 START TEST accel_negative_buffers 00:05:52.984 ************************************ 00:05:52.984 05:20:55 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:52.984 05:20:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.984 05:20:55 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:52.984 05:20:55 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:52.984 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.984 05:20:55 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:52.984 05:20:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.984 05:20:55 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:52.984 05:20:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:52.984 05:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.984 05:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.984 05:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.984 05:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.984 05:20:55 -- accel/accel.sh@42 -- # jq -r . 00:05:52.984 -x option must be non-negative. 00:05:52.984 [2024-12-07 05:20:56.003459] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:52.984 accel_perf options: 00:05:52.984 [-h help message] 00:05:52.984 [-q queue depth per core] 00:05:52.984 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:52.984 [-T number of threads per core 00:05:52.984 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:52.984 [-t time in seconds] 00:05:52.984 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:52.984 [ dif_verify, , dif_generate, dif_generate_copy 00:05:52.984 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:52.984 [-l for compress/decompress workloads, name of uncompressed input file 00:05:52.984 [-S for crc32c workload, use this seed value (default 0) 00:05:52.984 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:52.984 [-f for fill workload, use this BYTE value (default 255) 00:05:52.984 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:52.984 [-y verify result if this switch is on] 00:05:52.984 [-a tasks to allocate per core (default: same value as -q)] 00:05:52.984 Can be used to spread operations across a wider range of memory. 
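The usage text above doubles as the reference for a valid invocation: xor requires at least two source buffers, so -x -1 is rejected before any work is queued. A corrected form using only options listed in that help (a sketch, not taken from the suite itself):

# xor needs at least two source buffers, so the smallest valid request is -x 2.
accel_perf -t 1 -w xor -y -x 2
# The other knobs exercised later in this log map to the same help text:
# -S sets the crc32c seed, -q the queue depth, -o the transfer size in bytes.
accel_perf -t 1 -w crc32c -S 32 -y -q 32 -o 4096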
00:05:52.984 05:20:56 -- common/autotest_common.sh@653 -- # es=1 00:05:52.984 05:20:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.984 05:20:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.984 05:20:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.984 00:05:52.984 real 0m0.036s 00:05:52.984 user 0m0.023s 00:05:52.984 sys 0m0.013s 00:05:52.984 05:20:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.984 05:20:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.984 ************************************ 00:05:52.984 END TEST accel_negative_buffers 00:05:52.984 ************************************ 00:05:52.984 Error: writing output failed: Broken pipe 00:05:52.984 05:20:56 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:52.984 05:20:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:52.984 05:20:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.984 05:20:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.984 ************************************ 00:05:52.984 START TEST accel_crc32c 00:05:52.984 ************************************ 00:05:52.984 05:20:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:52.984 05:20:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.984 05:20:56 -- accel/accel.sh@17 -- # local accel_module 00:05:52.984 05:20:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:52.984 05:20:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:52.984 05:20:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.984 05:20:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.984 05:20:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.984 05:20:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.984 05:20:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.984 05:20:56 -- accel/accel.sh@42 -- # jq -r . 00:05:52.984 [2024-12-07 05:20:56.073546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.984 [2024-12-07 05:20:56.073623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614563 ] 00:05:52.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.985 [2024-12-07 05:20:56.139041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.985 [2024-12-07 05:20:56.208066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.397 05:20:57 -- accel/accel.sh@18 -- # out=' 00:05:54.397 SPDK Configuration: 00:05:54.397 Core mask: 0x1 00:05:54.397 00:05:54.397 Accel Perf Configuration: 00:05:54.397 Workload Type: crc32c 00:05:54.397 CRC-32C seed: 32 00:05:54.397 Transfer size: 4096 bytes 00:05:54.397 Vector count 1 00:05:54.397 Module: software 00:05:54.397 Queue depth: 32 00:05:54.397 Allocate depth: 32 00:05:54.397 # threads/core: 1 00:05:54.397 Run time: 1 seconds 00:05:54.397 Verify: Yes 00:05:54.397 00:05:54.397 Running for 1 seconds... 
00:05:54.397 00:05:54.397 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.397 ------------------------------------------------------------------------------------ 00:05:54.397 0,0 445920/s 1741 MiB/s 0 0 00:05:54.397 ==================================================================================== 00:05:54.397 Total 445920/s 1741 MiB/s 0 0' 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:54.397 05:20:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:54.397 05:20:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.397 05:20:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.397 05:20:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.397 05:20:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.397 05:20:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.397 05:20:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.397 05:20:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.397 05:20:57 -- accel/accel.sh@42 -- # jq -r . 00:05:54.397 [2024-12-07 05:20:57.360565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.397 [2024-12-07 05:20:57.360645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614904 ] 00:05:54.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.397 [2024-12-07 05:20:57.424277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.397 [2024-12-07 05:20:57.486330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val=0x1 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val=crc32c 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val=32 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 
05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.397 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.397 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.397 05:20:57 -- accel/accel.sh@21 -- # val=software 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val=32 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val=32 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val=1 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val=Yes 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.398 05:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.398 05:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.398 05:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 
00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@21 -- # val= 00:05:55.779 05:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.779 05:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.779 05:20:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.779 05:20:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:55.779 05:20:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.779 00:05:55.779 real 0m2.569s 00:05:55.779 user 0m2.385s 00:05:55.780 sys 0m0.192s 00:05:55.780 05:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.780 05:20:58 -- common/autotest_common.sh@10 -- # set +x 00:05:55.780 ************************************ 00:05:55.780 END TEST accel_crc32c 00:05:55.780 ************************************ 00:05:55.780 05:20:58 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:55.780 05:20:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:55.780 05:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.780 05:20:58 -- common/autotest_common.sh@10 -- # set +x 00:05:55.780 ************************************ 00:05:55.780 START TEST accel_crc32c_C2 00:05:55.780 ************************************ 00:05:55.780 05:20:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:55.780 05:20:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.780 05:20:58 -- accel/accel.sh@17 -- # local accel_module 00:05:55.780 05:20:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:55.780 05:20:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:55.780 05:20:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.780 05:20:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.780 05:20:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.780 05:20:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.780 05:20:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.780 05:20:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.780 05:20:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.780 05:20:58 -- accel/accel.sh@42 -- # jq -r . 00:05:55.780 [2024-12-07 05:20:58.688870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:55.780 [2024-12-07 05:20:58.688971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615090 ] 00:05:55.780 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.780 [2024-12-07 05:20:58.754975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.780 [2024-12-07 05:20:58.821704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.718 05:20:59 -- accel/accel.sh@18 -- # out=' 00:05:56.718 SPDK Configuration: 00:05:56.718 Core mask: 0x1 00:05:56.718 00:05:56.718 Accel Perf Configuration: 00:05:56.718 Workload Type: crc32c 00:05:56.718 CRC-32C seed: 0 00:05:56.718 Transfer size: 4096 bytes 00:05:56.718 Vector count 2 00:05:56.718 Module: software 00:05:56.718 Queue depth: 32 00:05:56.718 Allocate depth: 32 00:05:56.718 # threads/core: 1 00:05:56.718 Run time: 1 seconds 00:05:56.718 Verify: Yes 00:05:56.718 00:05:56.718 Running for 1 seconds... 00:05:56.718 00:05:56.718 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.718 ------------------------------------------------------------------------------------ 00:05:56.718 0,0 374816/s 2928 MiB/s 0 0 00:05:56.718 ==================================================================================== 00:05:56.718 Total 374816/s 1464 MiB/s 0 0' 00:05:56.718 05:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.718 05:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.718 05:20:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:56.718 05:20:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:56.718 05:20:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.718 05:20:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.718 05:20:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.718 05:20:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.718 05:20:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.718 05:20:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.718 05:20:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.718 05:20:59 -- accel/accel.sh@42 -- # jq -r . 00:05:56.979 [2024-12-07 05:20:59.975459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:56.979 [2024-12-07 05:20:59.975562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615276 ] 00:05:56.979 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.979 [2024-12-07 05:21:00.040558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.979 [2024-12-07 05:21:00.103969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val=0x1 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val=crc32c 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val=0 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.979 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.979 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.979 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val=software 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val=32 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val=32 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- 
accel/accel.sh@21 -- # val=1 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val=Yes 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.980 05:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.980 05:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.980 05:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.365 05:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.365 05:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.365 05:21:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.365 05:21:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:58.365 05:21:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.365 00:05:58.365 real 0m2.574s 00:05:58.365 user 0m2.381s 00:05:58.365 sys 0m0.198s 00:05:58.365 05:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.365 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.365 ************************************ 00:05:58.365 END TEST accel_crc32c_C2 00:05:58.365 ************************************ 00:05:58.365 05:21:01 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:58.365 05:21:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:58.365 05:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.365 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.365 ************************************ 00:05:58.365 START TEST accel_copy 
00:05:58.365 ************************************ 00:05:58.365 05:21:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:58.365 05:21:01 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.365 05:21:01 -- accel/accel.sh@17 -- # local accel_module 00:05:58.365 05:21:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:58.365 05:21:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:58.365 05:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.365 05:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.365 05:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.365 05:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.365 05:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.365 05:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.365 05:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.365 05:21:01 -- accel/accel.sh@42 -- # jq -r . 00:05:58.365 [2024-12-07 05:21:01.306345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.365 [2024-12-07 05:21:01.306418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615703 ] 00:05:58.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.365 [2024-12-07 05:21:01.369377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.365 [2024-12-07 05:21:01.432345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.749 05:21:02 -- accel/accel.sh@18 -- # out=' 00:05:59.749 SPDK Configuration: 00:05:59.749 Core mask: 0x1 00:05:59.749 00:05:59.749 Accel Perf Configuration: 00:05:59.749 Workload Type: copy 00:05:59.749 Transfer size: 4096 bytes 00:05:59.749 Vector count 1 00:05:59.749 Module: software 00:05:59.749 Queue depth: 32 00:05:59.749 Allocate depth: 32 00:05:59.749 # threads/core: 1 00:05:59.749 Run time: 1 seconds 00:05:59.749 Verify: Yes 00:05:59.749 00:05:59.749 Running for 1 seconds... 00:05:59.749 00:05:59.749 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.749 ------------------------------------------------------------------------------------ 00:05:59.749 0,0 304992/s 1191 MiB/s 0 0 00:05:59.749 ==================================================================================== 00:05:59.749 Total 304992/s 1191 MiB/s 0 0' 00:05:59.749 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.749 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.749 05:21:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:59.749 05:21:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:59.749 05:21:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.749 05:21:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.749 05:21:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.749 05:21:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.749 05:21:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.749 05:21:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.749 05:21:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.749 05:21:02 -- accel/accel.sh@42 -- # jq -r . 00:05:59.749 [2024-12-07 05:21:02.583362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:59.749 [2024-12-07 05:21:02.583436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616073 ] 00:05:59.749 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.749 [2024-12-07 05:21:02.645995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.750 [2024-12-07 05:21:02.708071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=0x1 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=copy 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=software 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=32 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=32 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=1 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val=Yes 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.750 05:21:02 -- accel/accel.sh@21 -- # val= 00:05:59.750 05:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # IFS=: 00:05:59.750 05:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@21 -- # val= 00:06:00.692 05:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:00.692 05:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:00.692 05:21:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.692 05:21:03 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:00.692 05:21:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.692 00:06:00.692 real 0m2.560s 00:06:00.692 user 0m2.376s 00:06:00.692 sys 0m0.189s 00:06:00.692 05:21:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.692 05:21:03 -- common/autotest_common.sh@10 -- # set +x 00:06:00.692 ************************************ 00:06:00.692 END TEST accel_copy 00:06:00.692 ************************************ 00:06:00.692 05:21:03 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.692 05:21:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:00.692 05:21:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.692 05:21:03 -- common/autotest_common.sh@10 -- # set +x 00:06:00.692 ************************************ 00:06:00.692 START TEST accel_fill 00:06:00.692 ************************************ 00:06:00.692 05:21:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.692 05:21:03 -- accel/accel.sh@16 -- # local accel_opc 
00:06:00.692 05:21:03 -- accel/accel.sh@17 -- # local accel_module 00:06:00.692 05:21:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.692 05:21:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.692 05:21:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.692 05:21:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.692 05:21:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.692 05:21:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.692 05:21:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.692 05:21:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.692 05:21:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.692 05:21:03 -- accel/accel.sh@42 -- # jq -r . 00:06:00.692 [2024-12-07 05:21:03.910812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.692 [2024-12-07 05:21:03.910920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616327 ] 00:06:00.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.953 [2024-12-07 05:21:03.976100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.953 [2024-12-07 05:21:04.042692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.338 05:21:05 -- accel/accel.sh@18 -- # out=' 00:06:02.338 SPDK Configuration: 00:06:02.338 Core mask: 0x1 00:06:02.338 00:06:02.338 Accel Perf Configuration: 00:06:02.338 Workload Type: fill 00:06:02.338 Fill pattern: 0x80 00:06:02.338 Transfer size: 4096 bytes 00:06:02.338 Vector count 1 00:06:02.338 Module: software 00:06:02.338 Queue depth: 64 00:06:02.338 Allocate depth: 64 00:06:02.338 # threads/core: 1 00:06:02.338 Run time: 1 seconds 00:06:02.338 Verify: Yes 00:06:02.338 00:06:02.338 Running for 1 seconds... 00:06:02.338 00:06:02.338 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.338 ------------------------------------------------------------------------------------ 00:06:02.338 0,0 466432/s 1822 MiB/s 0 0 00:06:02.338 ==================================================================================== 00:06:02.338 Total 466432/s 1822 MiB/s 0 0' 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.338 05:21:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.338 05:21:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.338 05:21:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.338 05:21:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.338 05:21:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.338 05:21:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.338 05:21:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.338 05:21:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.338 05:21:05 -- accel/accel.sh@42 -- # jq -r . 00:06:02.338 [2024-12-07 05:21:05.193916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.338 [2024-12-07 05:21:05.193993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616476 ] 00:06:02.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.338 [2024-12-07 05:21:05.256769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.338 [2024-12-07 05:21:05.320061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=0x1 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=fill 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=0x80 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=software 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=64 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=64 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- 
accel/accel.sh@21 -- # val=1 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val=Yes 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.338 05:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.338 05:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.338 05:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@21 -- # val= 00:06:03.277 05:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # IFS=: 00:06:03.277 05:21:06 -- accel/accel.sh@20 -- # read -r var val 00:06:03.277 05:21:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.277 05:21:06 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:03.277 05:21:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.277 00:06:03.277 real 0m2.568s 00:06:03.277 user 0m2.367s 00:06:03.277 sys 0m0.207s 00:06:03.277 05:21:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.277 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.277 ************************************ 00:06:03.277 END TEST accel_fill 00:06:03.277 ************************************ 00:06:03.277 05:21:06 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:03.277 05:21:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:03.277 05:21:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.277 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.277 ************************************ 00:06:03.277 START TEST 
accel_copy_crc32c 00:06:03.277 ************************************ 00:06:03.277 05:21:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:03.277 05:21:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.277 05:21:06 -- accel/accel.sh@17 -- # local accel_module 00:06:03.277 05:21:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:03.277 05:21:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:03.277 05:21:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.277 05:21:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.277 05:21:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.277 05:21:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.277 05:21:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.277 05:21:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.277 05:21:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.277 05:21:06 -- accel/accel.sh@42 -- # jq -r . 00:06:03.537 [2024-12-07 05:21:06.522090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.537 [2024-12-07 05:21:06.522189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616800 ] 00:06:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.537 [2024-12-07 05:21:06.585512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.537 [2024-12-07 05:21:06.647267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.917 05:21:07 -- accel/accel.sh@18 -- # out=' 00:06:04.917 SPDK Configuration: 00:06:04.917 Core mask: 0x1 00:06:04.917 00:06:04.917 Accel Perf Configuration: 00:06:04.917 Workload Type: copy_crc32c 00:06:04.917 CRC-32C seed: 0 00:06:04.917 Vector size: 4096 bytes 00:06:04.917 Transfer size: 4096 bytes 00:06:04.917 Vector count 1 00:06:04.917 Module: software 00:06:04.917 Queue depth: 32 00:06:04.917 Allocate depth: 32 00:06:04.917 # threads/core: 1 00:06:04.917 Run time: 1 seconds 00:06:04.917 Verify: Yes 00:06:04.917 00:06:04.917 Running for 1 seconds... 00:06:04.917 00:06:04.917 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.917 ------------------------------------------------------------------------------------ 00:06:04.917 0,0 248448/s 970 MiB/s 0 0 00:06:04.917 ==================================================================================== 00:06:04.917 Total 248448/s 970 MiB/s 0 0' 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:04.917 05:21:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:04.917 05:21:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.917 05:21:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.917 05:21:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.917 05:21:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.917 05:21:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.917 05:21:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.917 05:21:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.917 05:21:07 -- accel/accel.sh@42 -- # jq -r . 
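The accel_test wrapper traced just above starts accel_perf with its accel configuration handed over as JSON on /dev/fd/62 (that is what the build_accel_config and jq -r . entries are about). A minimal sketch of that pattern, using a placeholder config rather than whatever the wrapper really emits:

    # Sketch only: feed a JSON config to accel_perf on fd 62 via process
    # substitution, mirroring the '-c /dev/fd/62 -t 1 -w copy_crc32c -y'
    # invocation in the trace. The '{}' body is a placeholder assumption,
    # not SPDK's actual accel config.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    exec 62< <(echo '{}' | jq -r .)
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y
    exec 62<&-   # release the descriptor afterwards

Whether accel_perf accepts an empty JSON document here is an assumption; in the recorded runs the config comes from the (empty) accel_json_cfg array shown in the trace.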
00:06:04.917 [2024-12-07 05:21:07.800900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.917 [2024-12-07 05:21:07.801003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617134 ] 00:06:04.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.917 [2024-12-07 05:21:07.863708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.917 [2024-12-07 05:21:07.926115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=0x1 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=0 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=software 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=32 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 
00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=32 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=1 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val=Yes 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.917 05:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.917 05:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.917 05:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.857 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.857 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.857 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.857 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.858 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.858 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.858 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.858 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@21 -- # val= 00:06:05.858 05:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.858 05:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.858 05:21:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.858 05:21:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:05.858 05:21:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.858 00:06:05.858 real 0m2.564s 00:06:05.858 user 0m2.359s 00:06:05.858 sys 0m0.210s 00:06:05.858 05:21:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.858 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.858 ************************************ 00:06:05.858 END TEST accel_copy_crc32c 00:06:05.858 ************************************ 00:06:05.858 
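For reference, the bandwidth column in the copy_crc32c summary earlier in this test is simply transfers per second times the 4096-byte transfer size, truncated to whole MiB/s:

    # 248448 transfers/s x 4096 B per transfer, reported in MiB/s
    awk 'BEGIN { printf "%d MiB/s\n", int(248448 * 4096 / (1024 * 1024)) }'
    # prints 970, matching the "0,0 248448/s 970 MiB/s 0 0" row above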
05:21:09 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:05.858 05:21:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:05.858 05:21:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.858 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:06:06.118 ************************************ 00:06:06.118 START TEST accel_copy_crc32c_C2 00:06:06.118 ************************************ 00:06:06.118 05:21:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:06.118 05:21:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.118 05:21:09 -- accel/accel.sh@17 -- # local accel_module 00:06:06.118 05:21:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:06.118 05:21:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:06.118 05:21:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.118 05:21:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.118 05:21:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.118 05:21:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.118 05:21:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.118 05:21:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.118 05:21:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.118 05:21:09 -- accel/accel.sh@42 -- # jq -r . 00:06:06.118 [2024-12-07 05:21:09.129101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.118 [2024-12-07 05:21:09.129181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617708 ] 00:06:06.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.118 [2024-12-07 05:21:09.192830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.118 [2024-12-07 05:21:09.257367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.503 05:21:10 -- accel/accel.sh@18 -- # out=' 00:06:07.503 SPDK Configuration: 00:06:07.503 Core mask: 0x1 00:06:07.503 00:06:07.503 Accel Perf Configuration: 00:06:07.503 Workload Type: copy_crc32c 00:06:07.503 CRC-32C seed: 0 00:06:07.503 Vector size: 4096 bytes 00:06:07.503 Transfer size: 8192 bytes 00:06:07.503 Vector count 2 00:06:07.503 Module: software 00:06:07.503 Queue depth: 32 00:06:07.503 Allocate depth: 32 00:06:07.503 # threads/core: 1 00:06:07.503 Run time: 1 seconds 00:06:07.503 Verify: Yes 00:06:07.503 00:06:07.503 Running for 1 seconds... 
00:06:07.503 00:06:07.503 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.503 ------------------------------------------------------------------------------------ 00:06:07.503 0,0 187552/s 1465 MiB/s 0 0 00:06:07.503 ==================================================================================== 00:06:07.503 Total 187552/s 732 MiB/s 0 0' 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:07.503 05:21:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:07.503 05:21:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.503 05:21:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.503 05:21:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.503 05:21:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.503 05:21:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.503 05:21:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.503 05:21:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.503 05:21:10 -- accel/accel.sh@42 -- # jq -r . 00:06:07.503 [2024-12-07 05:21:10.412562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.503 [2024-12-07 05:21:10.412634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618054 ] 00:06:07.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.503 [2024-12-07 05:21:10.474829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.503 [2024-12-07 05:21:10.537422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=0x1 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=0 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 
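The long runs of val=..., case "$var" in, IFS=: and read -r var val entries around here are the harness passing workload settings to the test one key:value pair at a time and parsing them with IFS set to ':'. In outline it looks something like the sketch below; this is not the accel.sh source, and the key names are invented for illustration:

    # Producer/consumer sketch of the val=/read trace pattern above.
    printf '%s\n' 'opc:copy_crc32c' 'module:software' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. copy_crc32c, dualcast, xor
            module) accel_module=$val ;;  # "software" in all of these runs
        esac
        echo "parsed $var -> $val"
    done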
00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=software 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=32 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=32 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=1 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val=Yes 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:07.503 05:21:10 -- accel/accel.sh@21 -- # val= 00:06:07.503 05:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:07.503 05:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:08.444 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.444 05:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.444 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.444 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.445 05:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.445 05:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.445 05:21:11 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.445 05:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.445 05:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.445 05:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.445 05:21:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.445 05:21:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:08.445 05:21:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.445 00:06:08.445 real 0m2.567s 00:06:08.445 user 0m2.375s 00:06:08.445 sys 0m0.199s 00:06:08.445 05:21:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.445 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.445 ************************************ 00:06:08.445 END TEST accel_copy_crc32c_C2 00:06:08.445 ************************************ 00:06:08.705 05:21:11 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:08.705 05:21:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.705 05:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.705 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.705 ************************************ 00:06:08.705 START TEST accel_dualcast 00:06:08.705 ************************************ 00:06:08.705 05:21:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:08.705 05:21:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.705 05:21:11 -- accel/accel.sh@17 -- # local accel_module 00:06:08.705 05:21:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:08.705 05:21:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:08.705 05:21:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.705 05:21:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.705 05:21:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.705 05:21:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.705 05:21:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.705 05:21:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.705 05:21:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.705 05:21:11 -- accel/accel.sh@42 -- # jq -r . 00:06:08.705 [2024-12-07 05:21:11.740838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
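One detail worth noting in the accel_copy_crc32c_C2 summary further up: both rows report the same 187552 transfers/s, but the per-core row states bandwidth against the full 8192-byte transfer while the Total row appears to be computed against the 4096-byte vector size:

    # Same transfer rate, two different size bases (values from the log):
    awk 'BEGIN {
        printf "8192 B basis: %d MiB/s\n", int(187552 * 8192 / 1048576)   # 1465
        printf "4096 B basis: %d MiB/s\n", int(187552 * 4096 / 1048576)   # 732
    }'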
00:06:08.705 [2024-12-07 05:21:11.740943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618306 ] 00:06:08.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.705 [2024-12-07 05:21:11.806921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.705 [2024-12-07 05:21:11.872554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.091 05:21:12 -- accel/accel.sh@18 -- # out=' 00:06:10.091 SPDK Configuration: 00:06:10.091 Core mask: 0x1 00:06:10.091 00:06:10.091 Accel Perf Configuration: 00:06:10.091 Workload Type: dualcast 00:06:10.091 Transfer size: 4096 bytes 00:06:10.091 Vector count 1 00:06:10.091 Module: software 00:06:10.091 Queue depth: 32 00:06:10.091 Allocate depth: 32 00:06:10.091 # threads/core: 1 00:06:10.091 Run time: 1 seconds 00:06:10.091 Verify: Yes 00:06:10.091 00:06:10.091 Running for 1 seconds... 00:06:10.091 00:06:10.091 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.091 ------------------------------------------------------------------------------------ 00:06:10.091 0,0 363744/s 1420 MiB/s 0 0 00:06:10.091 ==================================================================================== 00:06:10.091 Total 363744/s 1420 MiB/s 0 0' 00:06:10.091 05:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:10.091 05:21:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:10.091 05:21:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.091 05:21:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.091 05:21:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.091 05:21:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.091 05:21:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.091 05:21:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.091 05:21:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.091 05:21:13 -- accel/accel.sh@42 -- # jq -r . 00:06:10.091 [2024-12-07 05:21:13.023400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.091 [2024-12-07 05:21:13.023484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618642 ] 00:06:10.091 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.091 [2024-12-07 05:21:13.086180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.091 [2024-12-07 05:21:13.147692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=0x1 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=dualcast 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=software 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=32 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=32 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=1 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val=Yes 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.091 05:21:13 -- accel/accel.sh@21 -- # val= 00:06:10.091 05:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # IFS=: 00:06:10.091 05:21:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.033 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.033 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.033 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.033 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.293 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.293 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.293 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.293 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.293 05:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.293 05:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.293 05:21:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.293 05:21:14 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:11.293 05:21:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.293 00:06:11.294 real 0m2.567s 00:06:11.294 user 0m2.369s 00:06:11.294 sys 0m0.202s 00:06:11.294 05:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.294 05:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:11.294 ************************************ 00:06:11.294 END TEST accel_dualcast 00:06:11.294 ************************************ 00:06:11.294 05:21:14 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:11.294 05:21:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.294 05:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.294 05:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:11.294 ************************************ 00:06:11.294 START TEST accel_compare 00:06:11.294 ************************************ 00:06:11.294 05:21:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:11.294 05:21:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.294 05:21:14 
-- accel/accel.sh@17 -- # local accel_module 00:06:11.294 05:21:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:11.294 05:21:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:11.294 05:21:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.294 05:21:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.294 05:21:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.294 05:21:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.294 05:21:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.294 05:21:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.294 05:21:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.294 05:21:14 -- accel/accel.sh@42 -- # jq -r . 00:06:11.294 [2024-12-07 05:21:14.348504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.294 [2024-12-07 05:21:14.348577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618987 ] 00:06:11.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.294 [2024-12-07 05:21:14.410933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.294 [2024-12-07 05:21:14.475223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.677 05:21:15 -- accel/accel.sh@18 -- # out=' 00:06:12.677 SPDK Configuration: 00:06:12.677 Core mask: 0x1 00:06:12.677 00:06:12.677 Accel Perf Configuration: 00:06:12.677 Workload Type: compare 00:06:12.677 Transfer size: 4096 bytes 00:06:12.677 Vector count 1 00:06:12.677 Module: software 00:06:12.677 Queue depth: 32 00:06:12.677 Allocate depth: 32 00:06:12.677 # threads/core: 1 00:06:12.677 Run time: 1 seconds 00:06:12.677 Verify: Yes 00:06:12.677 00:06:12.677 Running for 1 seconds... 00:06:12.677 00:06:12.677 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.677 ------------------------------------------------------------------------------------ 00:06:12.677 0,0 437184/s 1707 MiB/s 0 0 00:06:12.677 ==================================================================================== 00:06:12.677 Total 437184/s 1707 MiB/s 0 0' 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:12.677 05:21:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:12.677 05:21:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.677 05:21:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.677 05:21:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.677 05:21:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.677 05:21:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.677 05:21:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.677 05:21:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.677 05:21:15 -- accel/accel.sh@42 -- # jq -r . 00:06:12.677 [2024-12-07 05:21:15.627495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
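The two opcodes exercised around this point are simple data-movement primitives: dualcast writes one source buffer to two destinations, and compare checks two buffers for equality. Rough shell analogues, purely for illustration (the hypothetical file names stand in for the 4 KiB buffers accel_perf really uses):

    # dualcast: one source, two identical destinations
    tee copy_a.bin copy_b.bin < source.bin > /dev/null
    # compare: succeed only if the two buffers hold the same bytes
    cmp -s buf_a.bin buf_b.bin && echo 'buffers match'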
00:06:12.677 [2024-12-07 05:21:15.627603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619216 ] 00:06:12.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.677 [2024-12-07 05:21:15.690773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.677 [2024-12-07 05:21:15.753257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=0x1 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=compare 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=software 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=32 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=32 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=1 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val=Yes 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.677 05:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.677 05:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.677 05:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@21 -- # val= 00:06:14.060 05:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:14.060 05:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:14.060 05:21:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.060 05:21:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:14.060 05:21:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.060 00:06:14.060 real 0m2.562s 00:06:14.060 user 0m2.367s 00:06:14.060 sys 0m0.202s 00:06:14.060 05:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.060 05:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.060 ************************************ 00:06:14.060 END TEST accel_compare 00:06:14.060 ************************************ 00:06:14.060 05:21:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:14.060 05:21:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.060 05:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.061 05:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.061 ************************************ 00:06:14.061 START TEST accel_xor 00:06:14.061 ************************************ 00:06:14.061 05:21:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:14.061 05:21:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.061 05:21:16 -- accel/accel.sh@17 
-- # local accel_module 00:06:14.061 05:21:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:14.061 05:21:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.061 05:21:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.061 05:21:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.061 05:21:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.061 05:21:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.061 05:21:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.061 05:21:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.061 05:21:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.061 05:21:16 -- accel/accel.sh@42 -- # jq -r . 00:06:14.061 [2024-12-07 05:21:16.956658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.061 [2024-12-07 05:21:16.956786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619389 ] 00:06:14.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.061 [2024-12-07 05:21:17.029904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.061 [2024-12-07 05:21:17.096542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.999 05:21:18 -- accel/accel.sh@18 -- # out=' 00:06:14.999 SPDK Configuration: 00:06:14.999 Core mask: 0x1 00:06:14.999 00:06:14.999 Accel Perf Configuration: 00:06:14.999 Workload Type: xor 00:06:14.999 Source buffers: 2 00:06:14.999 Transfer size: 4096 bytes 00:06:14.999 Vector count 1 00:06:14.999 Module: software 00:06:14.999 Queue depth: 32 00:06:14.999 Allocate depth: 32 00:06:14.999 # threads/core: 1 00:06:14.999 Run time: 1 seconds 00:06:14.999 Verify: Yes 00:06:14.999 00:06:14.999 Running for 1 seconds... 00:06:14.999 00:06:14.999 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.999 ------------------------------------------------------------------------------------ 00:06:14.999 0,0 360672/s 1408 MiB/s 0 0 00:06:14.999 ==================================================================================== 00:06:14.999 Total 360672/s 1408 MiB/s 0 0' 00:06:14.999 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:14.999 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:14.999 05:21:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:14.999 05:21:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.999 05:21:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.999 05:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.999 05:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.999 05:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.999 05:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.999 05:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.999 05:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.999 05:21:18 -- accel/accel.sh@42 -- # jq -r . 00:06:15.259 [2024-12-07 05:21:18.248931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
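The xor run above folds its two 4096-byte source buffers together byte by byte into the destination, and the -x 3 variant that follows adds a third source. At the level of a single byte the operation is just:

    # Byte-wise XOR with two sources, then three (illustration only)
    printf '0x%02X\n' $(( 0xA5 ^ 0x0F ))           # -> 0xAA
    printf '0x%02X\n' $(( 0xA5 ^ 0x0F ^ 0x3C ))    # -> 0x96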
00:06:15.259 [2024-12-07 05:21:18.249004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619699 ] 00:06:15.259 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.259 [2024-12-07 05:21:18.312097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.259 [2024-12-07 05:21:18.374167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.259 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.259 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.259 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.259 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.259 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.259 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=0x1 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=xor 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=2 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=software 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=32 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=32 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- 
accel/accel.sh@21 -- # val=1 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val=Yes 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.260 05:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.260 05:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.260 05:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.654 05:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.654 05:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.654 05:21:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.654 05:21:19 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:16.654 05:21:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.654 00:06:16.654 real 0m2.577s 00:06:16.654 user 0m2.372s 00:06:16.654 sys 0m0.212s 00:06:16.654 05:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.654 05:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.654 ************************************ 00:06:16.654 END TEST accel_xor 00:06:16.654 ************************************ 00:06:16.654 05:21:19 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:16.654 05:21:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.654 05:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.654 05:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.654 ************************************ 00:06:16.654 START TEST accel_xor 
00:06:16.654 ************************************ 00:06:16.654 05:21:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:16.654 05:21:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.654 05:21:19 -- accel/accel.sh@17 -- # local accel_module 00:06:16.654 05:21:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.654 05:21:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.654 05:21:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.654 05:21:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.654 05:21:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.654 05:21:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.654 05:21:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.654 05:21:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.654 05:21:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.654 05:21:19 -- accel/accel.sh@42 -- # jq -r . 00:06:16.654 [2024-12-07 05:21:19.574720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.654 [2024-12-07 05:21:19.574823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620049 ] 00:06:16.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.654 [2024-12-07 05:21:19.637198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.654 [2024-12-07 05:21:19.699969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.684 05:21:20 -- accel/accel.sh@18 -- # out=' 00:06:17.684 SPDK Configuration: 00:06:17.684 Core mask: 0x1 00:06:17.684 00:06:17.684 Accel Perf Configuration: 00:06:17.684 Workload Type: xor 00:06:17.684 Source buffers: 3 00:06:17.684 Transfer size: 4096 bytes 00:06:17.684 Vector count 1 00:06:17.684 Module: software 00:06:17.684 Queue depth: 32 00:06:17.684 Allocate depth: 32 00:06:17.684 # threads/core: 1 00:06:17.684 Run time: 1 seconds 00:06:17.684 Verify: Yes 00:06:17.684 00:06:17.684 Running for 1 seconds... 00:06:17.684 00:06:17.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.684 ------------------------------------------------------------------------------------ 00:06:17.684 0,0 344032/s 1343 MiB/s 0 0 00:06:17.684 ==================================================================================== 00:06:17.684 Total 344032/s 1343 MiB/s 0 0' 00:06:17.684 05:21:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.684 05:21:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.684 05:21:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:17.684 05:21:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:17.684 05:21:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.684 05:21:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.684 05:21:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.684 05:21:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.684 05:21:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.684 05:21:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.684 05:21:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.684 05:21:20 -- accel/accel.sh@42 -- # jq -r . 00:06:17.684 [2024-12-07 05:21:20.853330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
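With one summary block per opcode recorded in the same format, the headline numbers can be pulled out of a saved copy of this console log in a single pass; a small sketch, assuming the output was captured to a hypothetical accel_run.log:

    # List each workload type together with its Total throughput row
    grep -E 'Workload Type:|Total [0-9]+/s' accel_run.log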
00:06:17.684 [2024-12-07 05:21:20.853422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620387 ] 00:06:17.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.684 [2024-12-07 05:21:20.916600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.946 [2024-12-07 05:21:20.978670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=0x1 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=xor 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=3 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=software 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=32 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=32 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- 
accel/accel.sh@21 -- # val=1 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val=Yes 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.946 05:21:21 -- accel/accel.sh@21 -- # val= 00:06:17.946 05:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.946 05:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@21 -- # val= 00:06:18.889 05:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.889 05:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.889 05:21:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.889 05:21:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:18.889 05:21:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.889 00:06:18.889 real 0m2.562s 00:06:18.889 user 0m2.373s 00:06:18.889 sys 0m0.195s 00:06:18.889 05:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.889 05:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:18.889 ************************************ 00:06:18.889 END TEST accel_xor 00:06:18.889 ************************************ 00:06:19.149 05:21:22 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:19.149 05:21:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:19.149 05:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.149 05:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.149 ************************************ 00:06:19.149 START TEST 
accel_dif_verify 00:06:19.149 ************************************ 00:06:19.149 05:21:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:19.149 05:21:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.149 05:21:22 -- accel/accel.sh@17 -- # local accel_module 00:06:19.149 05:21:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:19.149 05:21:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:19.149 05:21:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.149 05:21:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.149 05:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.149 05:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.149 05:21:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.149 05:21:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.149 05:21:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.149 05:21:22 -- accel/accel.sh@42 -- # jq -r . 00:06:19.149 [2024-12-07 05:21:22.178611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.149 [2024-12-07 05:21:22.178688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620560 ] 00:06:19.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.149 [2024-12-07 05:21:22.243997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.149 [2024-12-07 05:21:22.311258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.530 05:21:23 -- accel/accel.sh@18 -- # out=' 00:06:20.530 SPDK Configuration: 00:06:20.530 Core mask: 0x1 00:06:20.530 00:06:20.530 Accel Perf Configuration: 00:06:20.530 Workload Type: dif_verify 00:06:20.530 Vector size: 4096 bytes 00:06:20.530 Transfer size: 4096 bytes 00:06:20.530 Block size: 512 bytes 00:06:20.530 Metadata size: 8 bytes 00:06:20.530 Vector count 1 00:06:20.530 Module: software 00:06:20.530 Queue depth: 32 00:06:20.530 Allocate depth: 32 00:06:20.530 # threads/core: 1 00:06:20.530 Run time: 1 seconds 00:06:20.530 Verify: No 00:06:20.530 00:06:20.530 Running for 1 seconds... 00:06:20.530 00:06:20.530 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.530 ------------------------------------------------------------------------------------ 00:06:20.530 0,0 94816/s 376 MiB/s 0 0 00:06:20.530 ==================================================================================== 00:06:20.530 Total 94816/s 370 MiB/s 0 0' 00:06:20.530 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.530 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.530 05:21:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:20.530 05:21:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:20.530 05:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.530 05:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.530 05:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.530 05:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.530 05:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.530 05:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.530 05:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.530 05:21:23 -- accel/accel.sh@42 -- # jq -r . 
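The dif_verify echo above ("Vector size: 4096 bytes, Block size: 512 bytes, Metadata size: 8 bytes") is consistent with the usual T10 DIF layout, where every 512-byte block carries an 8-byte protection field — conventionally a 2-byte guard CRC, a 2-byte application tag and a 4-byte reference tag — and the workload checks that field on each 4 KiB transfer. The throughput lines follow the same transfers-per-second times transfer-size rule as the XOR pass; a quick check of the "Total" figure:
echo $((94816 * 4096 / 1024 / 1024))   # -> 370, matching "Total 94816/s 370 MiB/s"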
00:06:20.530 [2024-12-07 05:21:23.463855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.530 [2024-12-07 05:21:23.463928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620761 ] 00:06:20.530 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.530 [2024-12-07 05:21:23.525474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.530 [2024-12-07 05:21:23.587907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.530 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.530 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.530 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=0x1 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=dif_verify 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=software 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=32 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=32 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=1 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val=No 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.531 05:21:23 -- accel/accel.sh@21 -- # val= 00:06:20.531 05:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.531 05:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@21 -- # val= 00:06:21.914 05:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.914 05:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.914 05:21:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.914 05:21:24 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:21.914 05:21:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.914 00:06:21.914 real 0m2.567s 00:06:21.914 user 0m2.379s 00:06:21.914 sys 0m0.195s 00:06:21.914 05:21:24 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.914 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:06:21.914 ************************************ 00:06:21.914 END TEST accel_dif_verify 00:06:21.914 ************************************ 00:06:21.914 05:21:24 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:21.914 05:21:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:21.914 05:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.914 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:06:21.914 ************************************ 00:06:21.914 START TEST accel_dif_generate 00:06:21.914 ************************************ 00:06:21.914 05:21:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:21.914 05:21:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.914 05:21:24 -- accel/accel.sh@17 -- # local accel_module 00:06:21.914 05:21:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:21.915 05:21:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:21.915 05:21:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.915 05:21:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.915 05:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.915 05:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.915 05:21:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.915 05:21:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.915 05:21:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.915 05:21:24 -- accel/accel.sh@42 -- # jq -r . 00:06:21.915 [2024-12-07 05:21:24.790752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.915 [2024-12-07 05:21:24.790852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621114 ] 00:06:21.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.915 [2024-12-07 05:21:24.854266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.915 [2024-12-07 05:21:24.916485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.855 05:21:26 -- accel/accel.sh@18 -- # out=' 00:06:22.855 SPDK Configuration: 00:06:22.855 Core mask: 0x1 00:06:22.855 00:06:22.855 Accel Perf Configuration: 00:06:22.855 Workload Type: dif_generate 00:06:22.855 Vector size: 4096 bytes 00:06:22.856 Transfer size: 4096 bytes 00:06:22.856 Block size: 512 bytes 00:06:22.856 Metadata size: 8 bytes 00:06:22.856 Vector count 1 00:06:22.856 Module: software 00:06:22.856 Queue depth: 32 00:06:22.856 Allocate depth: 32 00:06:22.856 # threads/core: 1 00:06:22.856 Run time: 1 seconds 00:06:22.856 Verify: No 00:06:22.856 00:06:22.856 Running for 1 seconds... 
00:06:22.856 00:06:22.856 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.856 ------------------------------------------------------------------------------------ 00:06:22.856 0,0 114496/s 454 MiB/s 0 0 00:06:22.856 ==================================================================================== 00:06:22.856 Total 114496/s 447 MiB/s 0 0' 00:06:22.856 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:22.856 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:22.856 05:21:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:22.856 05:21:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:22.856 05:21:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.856 05:21:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.856 05:21:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.856 05:21:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.856 05:21:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.856 05:21:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.856 05:21:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.856 05:21:26 -- accel/accel.sh@42 -- # jq -r . 00:06:22.856 [2024-12-07 05:21:26.069862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.856 [2024-12-07 05:21:26.069965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621455 ] 00:06:23.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.116 [2024-12-07 05:21:26.132817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.116 [2024-12-07 05:21:26.194665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=0x1 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=dif_generate 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 
00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=software 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=32 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=32 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=1 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val=No 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.116 05:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.116 05:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.116 05:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@21 -- # val= 00:06:24.499 05:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:24.499 05:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:24.499 05:21:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.499 05:21:27 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:24.499 05:21:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.499 00:06:24.499 real 0m2.563s 00:06:24.499 user 0m2.368s 00:06:24.499 sys 0m0.201s 00:06:24.499 05:21:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.499 05:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.499 ************************************ 00:06:24.499 END TEST accel_dif_generate 00:06:24.499 ************************************ 00:06:24.499 05:21:27 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:24.499 05:21:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.499 05:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.499 05:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.499 ************************************ 00:06:24.499 START TEST accel_dif_generate_copy 00:06:24.499 ************************************ 00:06:24.499 05:21:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:24.499 05:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.499 05:21:27 -- accel/accel.sh@17 -- # local accel_module 00:06:24.499 05:21:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:24.499 05:21:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:24.499 05:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.499 05:21:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.499 05:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.499 05:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.499 05:21:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.499 05:21:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.500 05:21:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.500 05:21:27 -- accel/accel.sh@42 -- # jq -r . 00:06:24.500 [2024-12-07 05:21:27.395256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
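As the names suggest, the dif_generate pass that just finished produces the 8-byte protection field for each 512-byte block of the source buffer, while the dif_generate_copy pass starting here additionally copies the payload into a destination buffer as the protection data is inserted. Neither run is given the -y flag, which is presumably why both configuration echoes report "Verify: No", unlike the earlier XOR run. Their "Total" lines check out the same way:
echo $((114496 * 4096 / 1024 / 1024))   # -> 447 (dif_generate)
echo $(( 87616 * 4096 / 1024 / 1024))   # -> 342 (dif_generate_copy, reported a little further down the trace)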
00:06:24.500 [2024-12-07 05:21:27.395333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621686 ] 00:06:24.500 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.500 [2024-12-07 05:21:27.459233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.500 [2024-12-07 05:21:27.525303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.438 05:21:28 -- accel/accel.sh@18 -- # out=' 00:06:25.438 SPDK Configuration: 00:06:25.438 Core mask: 0x1 00:06:25.438 00:06:25.438 Accel Perf Configuration: 00:06:25.438 Workload Type: dif_generate_copy 00:06:25.438 Vector size: 4096 bytes 00:06:25.438 Transfer size: 4096 bytes 00:06:25.438 Vector count 1 00:06:25.438 Module: software 00:06:25.438 Queue depth: 32 00:06:25.438 Allocate depth: 32 00:06:25.438 # threads/core: 1 00:06:25.438 Run time: 1 seconds 00:06:25.438 Verify: No 00:06:25.438 00:06:25.438 Running for 1 seconds... 00:06:25.438 00:06:25.438 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.438 ------------------------------------------------------------------------------------ 00:06:25.438 0,0 87616/s 347 MiB/s 0 0 00:06:25.438 ==================================================================================== 00:06:25.438 Total 87616/s 342 MiB/s 0 0' 00:06:25.438 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.438 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.438 05:21:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:25.438 05:21:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:25.438 05:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.438 05:21:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.438 05:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.438 05:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.438 05:21:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.438 05:21:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.438 05:21:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.438 05:21:28 -- accel/accel.sh@42 -- # jq -r . 00:06:25.438 [2024-12-07 05:21:28.676421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.438 [2024-12-07 05:21:28.676505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621836 ] 00:06:25.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.699 [2024-12-07 05:21:28.740032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.699 [2024-12-07 05:21:28.803100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=0x1 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=software 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=32 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=32 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r 
var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=1 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val=No 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.699 05:21:28 -- accel/accel.sh@21 -- # val= 00:06:25.699 05:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.699 05:21:28 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@21 -- # val= 00:06:27.082 05:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:27.082 05:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.082 05:21:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.082 05:21:29 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:27.082 05:21:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.082 00:06:27.082 real 0m2.565s 00:06:27.082 user 0m2.358s 00:06:27.082 sys 0m0.213s 00:06:27.082 05:21:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.082 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:06:27.082 ************************************ 00:06:27.082 END TEST accel_dif_generate_copy 00:06:27.082 ************************************ 00:06:27.082 05:21:29 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:27.082 05:21:29 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.082 05:21:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:27.082 05:21:29 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.083 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:06:27.083 ************************************ 00:06:27.083 START TEST accel_comp 00:06:27.083 ************************************ 00:06:27.083 05:21:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.083 05:21:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.083 05:21:29 -- accel/accel.sh@17 -- # local accel_module 00:06:27.083 05:21:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.083 05:21:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.083 05:21:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.083 05:21:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.083 05:21:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.083 05:21:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.083 05:21:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.083 05:21:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.083 05:21:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.083 05:21:29 -- accel/accel.sh@42 -- # jq -r . 00:06:27.083 [2024-12-07 05:21:30.004557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.083 [2024-12-07 05:21:30.004633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622175 ] 00:06:27.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.083 [2024-12-07 05:21:30.067960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.083 [2024-12-07 05:21:30.132018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.025 05:21:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:28.025 00:06:28.025 SPDK Configuration: 00:06:28.025 Core mask: 0x1 00:06:28.025 00:06:28.025 Accel Perf Configuration: 00:06:28.025 Workload Type: compress 00:06:28.025 Transfer size: 4096 bytes 00:06:28.025 Vector count 1 00:06:28.025 Module: software 00:06:28.025 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.025 Queue depth: 32 00:06:28.025 Allocate depth: 32 00:06:28.025 # threads/core: 1 00:06:28.025 Run time: 1 seconds 00:06:28.025 Verify: No 00:06:28.025 00:06:28.025 Running for 1 seconds... 
00:06:28.025 00:06:28.025 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.025 ------------------------------------------------------------------------------------ 00:06:28.025 0,0 47520/s 198 MiB/s 0 0 00:06:28.025 ==================================================================================== 00:06:28.025 Total 47520/s 185 MiB/s 0 0' 00:06:28.025 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.025 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.025 05:21:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.025 05:21:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.025 05:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.287 05:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.287 05:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.287 05:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.287 05:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.287 05:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.287 05:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.287 05:21:31 -- accel/accel.sh@42 -- # jq -r . 00:06:28.287 [2024-12-07 05:21:31.286780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.287 [2024-12-07 05:21:31.286859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622514 ] 00:06:28.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.287 [2024-12-07 05:21:31.350563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.287 [2024-12-07 05:21:31.413057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=0x1 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=compress 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 
05:21:31 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=software 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=32 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=32 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=1 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val=No 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:28.287 05:21:31 -- accel/accel.sh@21 -- # val= 00:06:28.287 05:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:28.287 05:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.672 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # 
IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.673 05:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.673 05:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.673 05:21:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.673 05:21:32 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:29.673 05:21:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.673 00:06:29.673 real 0m2.569s 00:06:29.673 user 0m2.388s 00:06:29.673 sys 0m0.187s 00:06:29.673 05:21:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.673 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.673 ************************************ 00:06:29.673 END TEST accel_comp 00:06:29.673 ************************************ 00:06:29.673 05:21:32 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.673 05:21:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:29.673 05:21:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.673 05:21:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.673 ************************************ 00:06:29.673 START TEST accel_decomp 00:06:29.673 ************************************ 00:06:29.673 05:21:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.673 05:21:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.673 05:21:32 -- accel/accel.sh@17 -- # local accel_module 00:06:29.673 05:21:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.673 05:21:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.673 05:21:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.673 05:21:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.673 05:21:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.673 05:21:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.673 05:21:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.673 05:21:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.673 05:21:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.673 05:21:32 -- accel/accel.sh@42 -- # jq -r . 00:06:29.673 [2024-12-07 05:21:32.617744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
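In contrast to the earlier workloads, the compress and decompress passes also read input data from the file passed with -l — the bib file under test/accel, echoed back as "File Name" in the configuration — while still working in 4096-byte transfers per the echo, so the totals follow the same arithmetic (47520 transfers/s of 4096 bytes comes out to the quoted 185 MiB/s). A sketch of the traced compress invocation with the harness-supplied -c config dropped:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib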
00:06:29.673 [2024-12-07 05:21:32.617818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622836 ] 00:06:29.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.673 [2024-12-07 05:21:32.681807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.673 [2024-12-07 05:21:32.747465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.055 05:21:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:31.055 00:06:31.055 SPDK Configuration: 00:06:31.055 Core mask: 0x1 00:06:31.055 00:06:31.055 Accel Perf Configuration: 00:06:31.055 Workload Type: decompress 00:06:31.055 Transfer size: 4096 bytes 00:06:31.055 Vector count 1 00:06:31.055 Module: software 00:06:31.055 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.055 Queue depth: 32 00:06:31.055 Allocate depth: 32 00:06:31.055 # threads/core: 1 00:06:31.055 Run time: 1 seconds 00:06:31.055 Verify: Yes 00:06:31.055 00:06:31.055 Running for 1 seconds... 00:06:31.055 00:06:31.055 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.055 ------------------------------------------------------------------------------------ 00:06:31.055 0,0 62624/s 115 MiB/s 0 0 00:06:31.055 ==================================================================================== 00:06:31.055 Total 62624/s 244 MiB/s 0 0' 00:06:31.055 05:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.055 05:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.055 05:21:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.055 05:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.055 05:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.055 05:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.055 05:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.055 05:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.055 05:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.055 05:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.055 05:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.055 05:21:33 -- accel/accel.sh@42 -- # jq -r . 00:06:31.055 [2024-12-07 05:21:33.903370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
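When scanning a saved copy of this console output, the 4 KiB "Total" lines can all be cross-checked in one go; a rough sketch (the log file name here is hypothetical, and the calculation only applies to the 4096-byte runs, not to the full-buffer decompress variant that follows):
grep -o 'Total [0-9]*/s' nvmf-tcp-phy-autotest-console.log \
    | awk '{sub("/s","",$2); printf "%s/s -> %d MiB/s\n", $2, $2*4096/1048576}'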
00:06:31.055 [2024-12-07 05:21:33.903443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622959 ] 00:06:31.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.055 [2024-12-07 05:21:33.966083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.056 [2024-12-07 05:21:34.030364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=0x1 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=decompress 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=software 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=32 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 
-- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=32 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=1 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val=Yes 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.056 05:21:34 -- accel/accel.sh@21 -- # val= 00:06:31.056 05:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.056 05:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@21 -- # val= 00:06:31.997 05:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 05:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 05:21:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.997 05:21:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.997 05:21:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.997 00:06:31.997 real 0m2.574s 00:06:31.997 user 0m2.382s 00:06:31.997 sys 0m0.200s 00:06:31.997 05:21:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.997 05:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 ************************************ 00:06:31.997 END TEST accel_decomp 00:06:31.997 ************************************ 00:06:31.997 05:21:35 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:31.997 05:21:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:31.997 05:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.997 05:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.997 ************************************ 00:06:31.997 START TEST accel_decmop_full 00:06:31.997 ************************************ 00:06:31.997 05:21:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:31.997 05:21:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.997 05:21:35 -- accel/accel.sh@17 -- # local accel_module 00:06:31.997 05:21:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:31.997 05:21:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:31.997 05:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.998 05:21:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.998 05:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.998 05:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.998 05:21:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.998 05:21:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.998 05:21:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.998 05:21:35 -- accel/accel.sh@42 -- # jq -r . 00:06:32.257 [2024-12-07 05:21:35.236809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.257 [2024-12-07 05:21:35.236931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623239 ] 00:06:32.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.257 [2024-12-07 05:21:35.306630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.257 [2024-12-07 05:21:35.373645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.642 05:21:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:33.642 00:06:33.642 SPDK Configuration: 00:06:33.642 Core mask: 0x1 00:06:33.642 00:06:33.642 Accel Perf Configuration: 00:06:33.642 Workload Type: decompress 00:06:33.642 Transfer size: 111250 bytes 00:06:33.642 Vector count 1 00:06:33.642 Module: software 00:06:33.642 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.642 Queue depth: 32 00:06:33.642 Allocate depth: 32 00:06:33.642 # threads/core: 1 00:06:33.642 Run time: 1 seconds 00:06:33.642 Verify: Yes 00:06:33.642 00:06:33.642 Running for 1 seconds... 
00:06:33.642 00:06:33.642 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.642 ------------------------------------------------------------------------------------ 00:06:33.642 0,0 4032/s 166 MiB/s 0 0 00:06:33.642 ==================================================================================== 00:06:33.642 Total 4032/s 427 MiB/s 0 0' 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.642 05:21:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.642 05:21:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.642 05:21:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.642 05:21:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.642 05:21:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.642 05:21:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.642 05:21:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.642 05:21:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.642 05:21:36 -- accel/accel.sh@42 -- # jq -r . 00:06:33.642 [2024-12-07 05:21:36.535901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.642 [2024-12-07 05:21:36.535975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623575 ] 00:06:33.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.642 [2024-12-07 05:21:36.598516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.642 [2024-12-07 05:21:36.660741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val=0x1 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.642 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.642 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.642 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=decompress 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:33.643 05:21:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=software 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=32 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=32 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=1 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val=Yes 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.643 05:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.643 05:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.643 05:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- 
accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@21 -- # val= 00:06:34.585 05:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.585 05:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.585 05:21:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.585 05:21:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:34.585 05:21:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.585 00:06:34.585 real 0m2.598s 00:06:34.585 user 0m2.401s 00:06:34.585 sys 0m0.204s 00:06:34.585 05:21:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.585 05:21:37 -- common/autotest_common.sh@10 -- # set +x 00:06:34.585 ************************************ 00:06:34.585 END TEST accel_decmop_full 00:06:34.585 ************************************ 00:06:34.847 05:21:37 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.847 05:21:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:34.847 05:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.847 05:21:37 -- common/autotest_common.sh@10 -- # set +x 00:06:34.847 ************************************ 00:06:34.847 START TEST accel_decomp_mcore 00:06:34.847 ************************************ 00:06:34.847 05:21:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.847 05:21:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.847 05:21:37 -- accel/accel.sh@17 -- # local accel_module 00:06:34.847 05:21:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.847 05:21:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.847 05:21:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.847 05:21:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.847 05:21:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.847 05:21:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.847 05:21:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.847 05:21:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.847 05:21:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.847 05:21:37 -- accel/accel.sh@42 -- # jq -r . 00:06:34.847 [2024-12-07 05:21:37.875646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
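(Annotation, not captured output.) The accel_decomp_mcore case started above runs the same software decompress workload as the single-core tests, only with a wider core mask. Going by the command and configuration output recorded in this trace, the invocation for this run is:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
Here -m 0xf corresponds to the "Core mask: 0xf" line in the configuration block that follows, -t 1 to "Run time: 1 seconds", -y to "Verify: Yes", and -l to the bib input file reported as "File Name"; the -c /dev/fd/62 argument is the JSON accel config the wrapper assembles (the accel_json_cfg / jq lines visible in the trace).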
00:06:34.847 [2024-12-07 05:21:37.875720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623927 ] 00:06:34.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.847 [2024-12-07 05:21:37.939607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.847 [2024-12-07 05:21:38.007132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.847 [2024-12-07 05:21:38.007247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.847 [2024-12-07 05:21:38.007403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.847 [2024-12-07 05:21:38.007403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.237 05:21:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:36.237 00:06:36.237 SPDK Configuration: 00:06:36.237 Core mask: 0xf 00:06:36.237 00:06:36.237 Accel Perf Configuration: 00:06:36.237 Workload Type: decompress 00:06:36.237 Transfer size: 4096 bytes 00:06:36.237 Vector count 1 00:06:36.237 Module: software 00:06:36.237 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.237 Queue depth: 32 00:06:36.237 Allocate depth: 32 00:06:36.237 # threads/core: 1 00:06:36.237 Run time: 1 seconds 00:06:36.237 Verify: Yes 00:06:36.237 00:06:36.237 Running for 1 seconds... 00:06:36.237 00:06:36.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.237 ------------------------------------------------------------------------------------ 00:06:36.237 0,0 58592/s 107 MiB/s 0 0 00:06:36.237 3,0 58976/s 108 MiB/s 0 0 00:06:36.237 2,0 85984/s 158 MiB/s 0 0 00:06:36.237 1,0 58912/s 108 MiB/s 0 0 00:06:36.237 ==================================================================================== 00:06:36.237 Total 262464/s 1025 MiB/s 0 0' 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.237 05:21:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.237 05:21:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.237 05:21:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.237 05:21:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.237 05:21:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.237 05:21:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.237 05:21:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.237 05:21:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.237 05:21:39 -- accel/accel.sh@42 -- # jq -r . 00:06:36.237 [2024-12-07 05:21:39.170405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.237 [2024-12-07 05:21:39.170533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624120 ] 00:06:36.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.237 [2024-12-07 05:21:39.239274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.237 [2024-12-07 05:21:39.303596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.237 [2024-12-07 05:21:39.303710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.237 [2024-12-07 05:21:39.303863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.237 [2024-12-07 05:21:39.303863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=0xf 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=decompress 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=software 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=32 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=32 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=1 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val=Yes 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.237 05:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.237 05:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.237 05:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 
05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.621 05:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.621 05:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.621 05:21:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.621 05:21:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.621 05:21:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.621 00:06:37.621 real 0m2.595s 00:06:37.621 user 0m8.831s 00:06:37.621 sys 0m0.233s 00:06:37.621 05:21:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.621 05:21:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 ************************************ 00:06:37.621 END TEST accel_decomp_mcore 00:06:37.621 ************************************ 00:06:37.621 05:21:40 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.621 05:21:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:37.621 05:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.621 05:21:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 ************************************ 00:06:37.621 START TEST accel_decomp_full_mcore 00:06:37.621 ************************************ 00:06:37.621 05:21:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.621 05:21:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.621 05:21:40 -- accel/accel.sh@17 -- # local accel_module 00:06:37.621 05:21:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.621 05:21:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.621 05:21:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.621 05:21:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.621 05:21:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.621 05:21:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.621 05:21:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.621 05:21:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.621 05:21:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.621 05:21:40 -- accel/accel.sh@42 -- # jq -r . 00:06:37.621 [2024-12-07 05:21:40.513878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
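(Annotation, not captured output.) For the "_full" variants started here, the wrapper adds -o 0 to the otherwise identical command; in the surrounding configuration blocks that shows up as a 111250-byte transfer size instead of the 4096 bytes used by the plain decompress tests. The invocation recorded in the trace for this multicore full-buffer case is:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf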
00:06:37.621 [2024-12-07 05:21:40.513967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624321 ] 00:06:37.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.621 [2024-12-07 05:21:40.578289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.621 [2024-12-07 05:21:40.643451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.621 [2024-12-07 05:21:40.643568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.621 [2024-12-07 05:21:40.643714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.621 [2024-12-07 05:21:40.643715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.561 05:21:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:38.561 00:06:38.561 SPDK Configuration: 00:06:38.561 Core mask: 0xf 00:06:38.561 00:06:38.561 Accel Perf Configuration: 00:06:38.561 Workload Type: decompress 00:06:38.561 Transfer size: 111250 bytes 00:06:38.561 Vector count 1 00:06:38.561 Module: software 00:06:38.561 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.561 Queue depth: 32 00:06:38.561 Allocate depth: 32 00:06:38.561 # threads/core: 1 00:06:38.561 Run time: 1 seconds 00:06:38.561 Verify: Yes 00:06:38.561 00:06:38.561 Running for 1 seconds... 00:06:38.561 00:06:38.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.561 ------------------------------------------------------------------------------------ 00:06:38.561 0,0 4064/s 167 MiB/s 0 0 00:06:38.561 3,0 4064/s 167 MiB/s 0 0 00:06:38.562 2,0 5920/s 244 MiB/s 0 0 00:06:38.562 1,0 4064/s 167 MiB/s 0 0 00:06:38.562 ==================================================================================== 00:06:38.562 Total 18112/s 1921 MiB/s 0 0' 00:06:38.562 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.562 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.562 05:21:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.562 05:21:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.562 05:21:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.562 05:21:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.562 05:21:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.562 05:21:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.562 05:21:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.562 05:21:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.562 05:21:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.562 05:21:41 -- accel/accel.sh@42 -- # jq -r . 00:06:38.874 [2024-12-07 05:21:41.819612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:38.874 [2024-12-07 05:21:41.819687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624640 ] 00:06:38.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.874 [2024-12-07 05:21:41.882870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.874 [2024-12-07 05:21:41.947944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.874 [2024-12-07 05:21:41.948084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.874 [2024-12-07 05:21:41.948143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.874 [2024-12-07 05:21:41.948143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val=0xf 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val=decompress 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.874 05:21:41 -- accel/accel.sh@21 -- # val=software 00:06:38.874 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.874 05:21:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.874 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val=32 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val=32 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val=1 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val=Yes 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.875 05:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.875 05:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.875 05:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 
05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@21 -- # val= 00:06:40.256 05:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:40.256 05:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.256 05:21:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.256 05:21:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.256 05:21:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.256 00:06:40.256 real 0m2.615s 00:06:40.256 user 0m8.946s 00:06:40.256 sys 0m0.226s 00:06:40.256 05:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.256 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.256 ************************************ 00:06:40.256 END TEST accel_decomp_full_mcore 00:06:40.256 ************************************ 00:06:40.256 05:21:43 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.256 05:21:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:40.256 05:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.256 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.256 ************************************ 00:06:40.256 START TEST accel_decomp_mthread 00:06:40.256 ************************************ 00:06:40.256 05:21:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.256 05:21:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.256 05:21:43 -- accel/accel.sh@17 -- # local accel_module 00:06:40.256 05:21:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.256 05:21:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.256 05:21:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.256 05:21:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.256 05:21:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.256 05:21:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.256 05:21:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.256 05:21:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.256 05:21:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.256 05:21:43 -- accel/accel.sh@42 -- # jq -r . 00:06:40.256 [2024-12-07 05:21:43.173347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.256 [2024-12-07 05:21:43.173459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624994 ] 00:06:40.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.256 [2024-12-07 05:21:43.238522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.256 [2024-12-07 05:21:43.303660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.197 05:21:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:41.197 00:06:41.197 SPDK Configuration: 00:06:41.197 Core mask: 0x1 00:06:41.197 00:06:41.197 Accel Perf Configuration: 00:06:41.197 Workload Type: decompress 00:06:41.197 Transfer size: 4096 bytes 00:06:41.197 Vector count 1 00:06:41.197 Module: software 00:06:41.197 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.197 Queue depth: 32 00:06:41.197 Allocate depth: 32 00:06:41.197 # threads/core: 2 00:06:41.197 Run time: 1 seconds 00:06:41.197 Verify: Yes 00:06:41.197 00:06:41.197 Running for 1 seconds... 00:06:41.197 00:06:41.197 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.197 ------------------------------------------------------------------------------------ 00:06:41.197 0,1 31616/s 58 MiB/s 0 0 00:06:41.197 0,0 31520/s 58 MiB/s 0 0 00:06:41.197 ==================================================================================== 00:06:41.197 Total 63136/s 246 MiB/s 0 0' 00:06:41.458 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.458 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.458 05:21:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.458 05:21:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.458 05:21:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.458 05:21:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.458 05:21:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.458 05:21:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.458 05:21:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.458 05:21:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.458 05:21:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.458 05:21:44 -- accel/accel.sh@42 -- # jq -r . 00:06:41.459 [2024-12-07 05:21:44.461208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.459 [2024-12-07 05:21:44.461282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625301 ] 00:06:41.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.459 [2024-12-07 05:21:44.523434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.459 [2024-12-07 05:21:44.585639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=0x1 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=decompress 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=software 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=32 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 
-- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=32 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=2 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val=Yes 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:41.459 05:21:44 -- accel/accel.sh@21 -- # val= 00:06:41.459 05:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # IFS=: 00:06:41.459 05:21:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@21 -- # val= 00:06:42.847 05:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.847 05:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.847 05:21:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.847 05:21:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:42.847 05:21:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.847 00:06:42.847 real 0m2.577s 00:06:42.847 user 0m2.388s 00:06:42.847 sys 0m0.197s 00:06:42.847 05:21:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.847 05:21:45 -- common/autotest_common.sh@10 -- # set +x 
00:06:42.847 ************************************ 00:06:42.847 END TEST accel_decomp_mthread 00:06:42.847 ************************************ 00:06:42.847 05:21:45 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.847 05:21:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:42.847 05:21:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.847 05:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.847 ************************************ 00:06:42.847 START TEST accel_deomp_full_mthread 00:06:42.847 ************************************ 00:06:42.847 05:21:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.847 05:21:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.847 05:21:45 -- accel/accel.sh@17 -- # local accel_module 00:06:42.847 05:21:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.847 05:21:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.847 05:21:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.847 05:21:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.847 05:21:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.847 05:21:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.847 05:21:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.847 05:21:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.847 05:21:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.847 05:21:45 -- accel/accel.sh@42 -- # jq -r . 00:06:42.847 [2024-12-07 05:21:45.794783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.847 [2024-12-07 05:21:45.794889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625466 ] 00:06:42.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.847 [2024-12-07 05:21:45.860359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.847 [2024-12-07 05:21:45.926963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.227 05:21:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.227 00:06:44.227 SPDK Configuration: 00:06:44.227 Core mask: 0x1 00:06:44.227 00:06:44.227 Accel Perf Configuration: 00:06:44.227 Workload Type: decompress 00:06:44.227 Transfer size: 111250 bytes 00:06:44.227 Vector count 1 00:06:44.227 Module: software 00:06:44.227 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.227 Queue depth: 32 00:06:44.227 Allocate depth: 32 00:06:44.227 # threads/core: 2 00:06:44.227 Run time: 1 seconds 00:06:44.227 Verify: Yes 00:06:44.227 00:06:44.227 Running for 1 seconds... 
00:06:44.227 00:06:44.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.227 ------------------------------------------------------------------------------------ 00:06:44.227 0,1 2080/s 85 MiB/s 0 0 00:06:44.227 0,0 2048/s 84 MiB/s 0 0 00:06:44.227 ==================================================================================== 00:06:44.227 Total 4128/s 437 MiB/s 0 0' 00:06:44.227 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.227 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.227 05:21:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.227 05:21:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.227 05:21:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.228 05:21:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.228 05:21:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.228 05:21:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.228 05:21:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.228 05:21:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.228 05:21:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.228 05:21:47 -- accel/accel.sh@42 -- # jq -r . 00:06:44.228 [2024-12-07 05:21:47.113247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.228 [2024-12-07 05:21:47.113364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625705 ] 00:06:44.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.228 [2024-12-07 05:21:47.175557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.228 [2024-12-07 05:21:47.237752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=0x1 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=decompress 00:06:44.228 
05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=software 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=32 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=32 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=2 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val=Yes 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:44.228 05:21:47 -- accel/accel.sh@21 -- # val= 00:06:44.228 05:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:44.228 05:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@21 -- # val= 00:06:45.169 05:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.169 05:21:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.169 05:21:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.169 05:21:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:45.169 05:21:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.169 00:06:45.169 real 0m2.638s 00:06:45.169 user 0m2.447s 00:06:45.169 sys 0m0.198s 00:06:45.169 05:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.169 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.169 ************************************ 00:06:45.169 END TEST accel_deomp_full_mthread 00:06:45.169 ************************************ 00:06:45.430 05:21:48 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:45.430 05:21:48 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.430 05:21:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:45.430 05:21:48 -- accel/accel.sh@129 -- # build_accel_config 00:06:45.430 05:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.430 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.430 05:21:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.430 05:21:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.430 05:21:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.430 05:21:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.430 05:21:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.430 05:21:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.430 05:21:48 -- accel/accel.sh@42 -- # jq -r . 00:06:45.430 ************************************ 00:06:45.430 START TEST accel_dif_functional_tests 00:06:45.431 ************************************ 00:06:45.431 05:21:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.431 [2024-12-07 05:21:48.500118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.431 [2024-12-07 05:21:48.500177] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626057 ] 00:06:45.431 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.431 [2024-12-07 05:21:48.562750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.431 [2024-12-07 05:21:48.627464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.431 [2024-12-07 05:21:48.627578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.431 [2024-12-07 05:21:48.627580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.692 00:06:45.692 00:06:45.692 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.692 http://cunit.sourceforge.net/ 00:06:45.692 00:06:45.692 00:06:45.692 Suite: accel_dif 00:06:45.692 Test: verify: DIF generated, GUARD check ...passed 00:06:45.692 Test: verify: DIF generated, APPTAG check ...passed 00:06:45.692 Test: verify: DIF generated, REFTAG check ...passed 00:06:45.692 Test: verify: DIF not generated, GUARD check ...[2024-12-07 05:21:48.683212] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.692 [2024-12-07 05:21:48.683252] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.692 passed 00:06:45.692 Test: verify: DIF not generated, APPTAG check ...[2024-12-07 05:21:48.683286] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.692 [2024-12-07 05:21:48.683302] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.692 passed 00:06:45.692 Test: verify: DIF not generated, REFTAG check ...[2024-12-07 05:21:48.683318] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.692 [2024-12-07 05:21:48.683332] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.692 passed 00:06:45.692 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:45.692 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-07 05:21:48.683375] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:45.692 passed 00:06:45.692 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:45.692 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:45.692 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:45.692 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-07 05:21:48.683492] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:45.692 passed 00:06:45.692 Test: generate copy: DIF generated, GUARD check ...passed 00:06:45.692 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:45.692 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:45.692 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:45.692 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:45.692 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:45.692 Test: generate copy: iovecs-len validate ...[2024-12-07 05:21:48.683682] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:45.692 passed 00:06:45.692 Test: generate copy: buffer alignment validate ...passed 00:06:45.692 00:06:45.692 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.692 suites 1 1 n/a 0 0 00:06:45.692 tests 20 20 20 0 0 00:06:45.692 asserts 204 204 204 0 n/a 00:06:45.692 00:06:45.692 Elapsed time = 0.000 seconds 00:06:45.692 00:06:45.692 real 0m0.351s 00:06:45.693 user 0m0.490s 00:06:45.693 sys 0m0.124s 00:06:45.693 05:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.693 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.693 ************************************ 00:06:45.693 END TEST accel_dif_functional_tests 00:06:45.693 ************************************ 00:06:45.693 00:06:45.693 real 0m55.024s 00:06:45.693 user 1m3.534s 00:06:45.693 sys 0m5.734s 00:06:45.693 05:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.693 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.693 ************************************ 00:06:45.693 END TEST accel 00:06:45.693 ************************************ 00:06:45.693 05:21:48 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.693 05:21:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.693 05:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.693 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.693 ************************************ 00:06:45.693 START TEST accel_rpc 00:06:45.693 ************************************ 00:06:45.693 05:21:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.954 * Looking for test storage... 00:06:45.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:45.954 05:21:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.954 05:21:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.954 05:21:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.954 05:21:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.954 05:21:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.954 05:21:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.954 05:21:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.954 05:21:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.954 05:21:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.954 05:21:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.954 05:21:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.954 05:21:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.954 05:21:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.954 05:21:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.954 05:21:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.954 05:21:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.954 05:21:49 -- scripts/common.sh@344 -- # : 1 00:06:45.954 05:21:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.954 05:21:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.954 05:21:49 -- scripts/common.sh@364 -- # decimal 1 00:06:45.954 05:21:49 -- scripts/common.sh@352 -- # local d=1 00:06:45.954 05:21:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.954 05:21:49 -- scripts/common.sh@354 -- # echo 1 00:06:45.954 05:21:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.954 05:21:49 -- scripts/common.sh@365 -- # decimal 2 00:06:45.954 05:21:49 -- scripts/common.sh@352 -- # local d=2 00:06:45.954 05:21:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.954 05:21:49 -- scripts/common.sh@354 -- # echo 2 00:06:45.954 05:21:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.954 05:21:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.954 05:21:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.954 05:21:49 -- scripts/common.sh@367 -- # return 0 00:06:45.954 05:21:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.954 05:21:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.954 --rc genhtml_branch_coverage=1 00:06:45.954 --rc genhtml_function_coverage=1 00:06:45.954 --rc genhtml_legend=1 00:06:45.954 --rc geninfo_all_blocks=1 00:06:45.954 --rc geninfo_unexecuted_blocks=1 00:06:45.954 00:06:45.954 ' 00:06:45.954 05:21:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.954 --rc genhtml_branch_coverage=1 00:06:45.954 --rc genhtml_function_coverage=1 00:06:45.954 --rc genhtml_legend=1 00:06:45.954 --rc geninfo_all_blocks=1 00:06:45.954 --rc geninfo_unexecuted_blocks=1 00:06:45.954 00:06:45.954 ' 00:06:45.954 05:21:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.954 --rc genhtml_branch_coverage=1 00:06:45.954 --rc genhtml_function_coverage=1 00:06:45.954 --rc genhtml_legend=1 00:06:45.954 --rc geninfo_all_blocks=1 00:06:45.954 --rc geninfo_unexecuted_blocks=1 00:06:45.954 00:06:45.954 ' 00:06:45.954 05:21:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.954 --rc genhtml_branch_coverage=1 00:06:45.954 --rc genhtml_function_coverage=1 00:06:45.954 --rc genhtml_legend=1 00:06:45.954 --rc geninfo_all_blocks=1 00:06:45.954 --rc geninfo_unexecuted_blocks=1 00:06:45.954 00:06:45.954 ' 00:06:45.954 05:21:49 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.954 05:21:49 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:45.954 05:21:49 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1626170 00:06:45.954 05:21:49 -- accel/accel_rpc.sh@15 -- # waitforlisten 1626170 00:06:45.954 05:21:49 -- common/autotest_common.sh@829 -- # '[' -z 1626170 ']' 00:06:45.954 05:21:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.954 05:21:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.954 05:21:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
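The accel_rpc run that follows boots spdk_tgt with --wait-for-rpc, pins the copy opcode to the software module, completes subsystem initialization, and then reads the assignment back. A sketch of the same sequence issued by hand, using the same rpc.py client and method names that appear in the trace (assumes a target already started with --wait-for-rpc on the default socket):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m software     # pin the "copy" opcode to the software module
  $RPC framework_start_init                     # complete init, as the test does after assigning
  $RPC accel_get_opc_assignments | jq -r .copy  # expected output: software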
00:06:45.954 05:21:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.955 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:45.955 [2024-12-07 05:21:49.099074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.955 [2024-12-07 05:21:49.099142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626170 ] 00:06:45.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.955 [2024-12-07 05:21:49.155870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.215 [2024-12-07 05:21:49.221736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.215 [2024-12-07 05:21:49.221860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.790 05:21:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.790 05:21:49 -- common/autotest_common.sh@862 -- # return 0 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:46.790 05:21:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.790 05:21:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.790 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:46.790 ************************************ 00:06:46.790 START TEST accel_assign_opcode 00:06:46.790 ************************************ 00:06:46.790 05:21:49 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:46.790 05:21:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.790 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:46.790 [2024-12-07 05:21:49.879751] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:46.790 05:21:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:46.790 05:21:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.790 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:46.790 [2024-12-07 05:21:49.887765] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:46.790 05:21:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.790 05:21:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:46.790 05:21:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.790 05:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:47.052 05:21:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.052 05:21:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:47.052 05:21:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:47.052 05:21:50 -- accel/accel_rpc.sh@42 -- # grep software 00:06:47.052 05:21:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.052 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.052 05:21:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:47.052 software 00:06:47.052 00:06:47.052 real 0m0.199s 00:06:47.052 user 0m0.043s 00:06:47.052 sys 0m0.010s 00:06:47.052 05:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.052 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.052 ************************************ 00:06:47.052 END TEST accel_assign_opcode 00:06:47.052 ************************************ 00:06:47.052 05:21:50 -- accel/accel_rpc.sh@55 -- # killprocess 1626170 00:06:47.052 05:21:50 -- common/autotest_common.sh@936 -- # '[' -z 1626170 ']' 00:06:47.052 05:21:50 -- common/autotest_common.sh@940 -- # kill -0 1626170 00:06:47.052 05:21:50 -- common/autotest_common.sh@941 -- # uname 00:06:47.052 05:21:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.052 05:21:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1626170 00:06:47.052 05:21:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.052 05:21:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.052 05:21:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1626170' 00:06:47.052 killing process with pid 1626170 00:06:47.052 05:21:50 -- common/autotest_common.sh@955 -- # kill 1626170 00:06:47.052 05:21:50 -- common/autotest_common.sh@960 -- # wait 1626170 00:06:47.314 00:06:47.314 real 0m1.503s 00:06:47.314 user 0m1.567s 00:06:47.314 sys 0m0.393s 00:06:47.314 05:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.314 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.314 ************************************ 00:06:47.314 END TEST accel_rpc 00:06:47.314 ************************************ 00:06:47.314 05:21:50 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.314 05:21:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.314 05:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.314 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.314 ************************************ 00:06:47.314 START TEST app_cmdline 00:06:47.314 ************************************ 00:06:47.314 05:21:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.314 * Looking for test storage... 
00:06:47.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.314 05:21:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.314 05:21:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.314 05:21:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.575 05:21:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.575 05:21:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.575 05:21:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.575 05:21:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.575 05:21:50 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.575 05:21:50 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.575 05:21:50 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.575 05:21:50 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.575 05:21:50 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.575 05:21:50 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.575 05:21:50 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.575 05:21:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.575 05:21:50 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.575 05:21:50 -- scripts/common.sh@344 -- # : 1 00:06:47.575 05:21:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.575 05:21:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.575 05:21:50 -- scripts/common.sh@364 -- # decimal 1 00:06:47.575 05:21:50 -- scripts/common.sh@352 -- # local d=1 00:06:47.575 05:21:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.575 05:21:50 -- scripts/common.sh@354 -- # echo 1 00:06:47.575 05:21:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.575 05:21:50 -- scripts/common.sh@365 -- # decimal 2 00:06:47.575 05:21:50 -- scripts/common.sh@352 -- # local d=2 00:06:47.575 05:21:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.575 05:21:50 -- scripts/common.sh@354 -- # echo 2 00:06:47.575 05:21:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.575 05:21:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.575 05:21:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.575 05:21:50 -- scripts/common.sh@367 -- # return 0 00:06:47.575 05:21:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.575 05:21:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.575 --rc genhtml_branch_coverage=1 00:06:47.575 --rc genhtml_function_coverage=1 00:06:47.575 --rc genhtml_legend=1 00:06:47.575 --rc geninfo_all_blocks=1 00:06:47.575 --rc geninfo_unexecuted_blocks=1 00:06:47.575 00:06:47.575 ' 00:06:47.575 05:21:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.575 --rc genhtml_branch_coverage=1 00:06:47.575 --rc genhtml_function_coverage=1 00:06:47.575 --rc genhtml_legend=1 00:06:47.575 --rc geninfo_all_blocks=1 00:06:47.575 --rc geninfo_unexecuted_blocks=1 00:06:47.575 00:06:47.575 ' 00:06:47.575 05:21:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.575 --rc genhtml_branch_coverage=1 00:06:47.575 --rc genhtml_function_coverage=1 00:06:47.575 --rc genhtml_legend=1 00:06:47.575 --rc geninfo_all_blocks=1 00:06:47.575 --rc geninfo_unexecuted_blocks=1 00:06:47.575 00:06:47.575 ' 
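The app_cmdline test that follows starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then checks both the version object and the allowed-method list. A sketch of poking the same target by hand, using only the method names and JSON fields visible in the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC spdk_get_version | jq -r '.version'    # "SPDK v24.01.1-pre git sha1 c13c99a5e" on this build
  $RPC rpc_get_methods  | jq -r '.[]' | sort  # only the two allowed methods come back
  $RPC env_dpdk_get_mem_stats                 # rejected: "Method not found" (-32601), as the NOT check expects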
00:06:47.575 05:21:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.575 --rc genhtml_branch_coverage=1 00:06:47.575 --rc genhtml_function_coverage=1 00:06:47.575 --rc genhtml_legend=1 00:06:47.575 --rc geninfo_all_blocks=1 00:06:47.575 --rc geninfo_unexecuted_blocks=1 00:06:47.575 00:06:47.575 ' 00:06:47.575 05:21:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.575 05:21:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1626551 00:06:47.575 05:21:50 -- app/cmdline.sh@18 -- # waitforlisten 1626551 00:06:47.575 05:21:50 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.575 05:21:50 -- common/autotest_common.sh@829 -- # '[' -z 1626551 ']' 00:06:47.575 05:21:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.575 05:21:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.575 05:21:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.575 05:21:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.575 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.575 [2024-12-07 05:21:50.672592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.575 [2024-12-07 05:21:50.672673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626551 ] 00:06:47.575 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.575 [2024-12-07 05:21:50.739121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.575 [2024-12-07 05:21:50.813555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.836 [2024-12-07 05:21:50.813705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.409 05:21:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.409 05:21:51 -- common/autotest_common.sh@862 -- # return 0 00:06:48.409 05:21:51 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:48.409 { 00:06:48.409 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:48.409 "fields": { 00:06:48.409 "major": 24, 00:06:48.409 "minor": 1, 00:06:48.409 "patch": 1, 00:06:48.409 "suffix": "-pre", 00:06:48.409 "commit": "c13c99a5e" 00:06:48.409 } 00:06:48.409 } 00:06:48.409 05:21:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.409 05:21:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.409 05:21:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.409 05:21:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.409 05:21:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.409 05:21:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.409 05:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.409 05:21:51 -- app/cmdline.sh@26 -- # sort 00:06:48.409 05:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:48.409 05:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.409 05:21:51 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.409 05:21:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.409 05:21:51 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.671 05:21:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.671 05:21:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.671 05:21:51 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.671 05:21:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.671 05:21:51 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.671 05:21:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.671 05:21:51 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.671 05:21:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.671 05:21:51 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.671 05:21:51 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:48.671 05:21:51 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.671 request: 00:06:48.671 { 00:06:48.671 "method": "env_dpdk_get_mem_stats", 00:06:48.671 "req_id": 1 00:06:48.671 } 00:06:48.671 Got JSON-RPC error response 00:06:48.671 response: 00:06:48.671 { 00:06:48.671 "code": -32601, 00:06:48.671 "message": "Method not found" 00:06:48.671 } 00:06:48.671 05:21:51 -- common/autotest_common.sh@653 -- # es=1 00:06:48.671 05:21:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.671 05:21:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.671 05:21:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.671 05:21:51 -- app/cmdline.sh@1 -- # killprocess 1626551 00:06:48.671 05:21:51 -- common/autotest_common.sh@936 -- # '[' -z 1626551 ']' 00:06:48.671 05:21:51 -- common/autotest_common.sh@940 -- # kill -0 1626551 00:06:48.671 05:21:51 -- common/autotest_common.sh@941 -- # uname 00:06:48.671 05:21:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.671 05:21:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1626551 00:06:48.671 05:21:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.671 05:21:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.671 05:21:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1626551' 00:06:48.671 killing process with pid 1626551 00:06:48.671 05:21:51 -- common/autotest_common.sh@955 -- # kill 1626551 00:06:48.672 05:21:51 -- common/autotest_common.sh@960 -- # wait 1626551 00:06:48.933 00:06:48.933 real 0m1.668s 00:06:48.933 user 0m1.973s 00:06:48.933 sys 0m0.443s 00:06:48.933 05:21:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.933 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.933 ************************************ 00:06:48.933 END TEST app_cmdline 00:06:48.933 ************************************ 00:06:48.933 05:21:52 -- 
spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:48.933 05:21:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.933 05:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.933 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.933 ************************************ 00:06:48.933 START TEST version 00:06:48.933 ************************************ 00:06:48.933 05:21:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.194 * Looking for test storage... 00:06:49.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.194 05:21:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.194 05:21:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.194 05:21:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.194 05:21:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.194 05:21:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.194 05:21:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.194 05:21:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.194 05:21:52 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.194 05:21:52 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.194 05:21:52 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.195 05:21:52 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.195 05:21:52 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.195 05:21:52 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.195 05:21:52 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.195 05:21:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.195 05:21:52 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.195 05:21:52 -- scripts/common.sh@344 -- # : 1 00:06:49.195 05:21:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.195 05:21:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.195 05:21:52 -- scripts/common.sh@364 -- # decimal 1 00:06:49.195 05:21:52 -- scripts/common.sh@352 -- # local d=1 00:06:49.195 05:21:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.195 05:21:52 -- scripts/common.sh@354 -- # echo 1 00:06:49.195 05:21:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.195 05:21:52 -- scripts/common.sh@365 -- # decimal 2 00:06:49.195 05:21:52 -- scripts/common.sh@352 -- # local d=2 00:06:49.195 05:21:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.195 05:21:52 -- scripts/common.sh@354 -- # echo 2 00:06:49.195 05:21:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.195 05:21:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.195 05:21:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.195 05:21:52 -- scripts/common.sh@367 -- # return 0 00:06:49.195 05:21:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.195 05:21:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.195 --rc genhtml_branch_coverage=1 00:06:49.195 --rc genhtml_function_coverage=1 00:06:49.195 --rc genhtml_legend=1 00:06:49.195 --rc geninfo_all_blocks=1 00:06:49.195 --rc geninfo_unexecuted_blocks=1 00:06:49.195 00:06:49.195 ' 00:06:49.195 05:21:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.195 --rc genhtml_branch_coverage=1 00:06:49.195 --rc genhtml_function_coverage=1 00:06:49.195 --rc genhtml_legend=1 00:06:49.195 --rc geninfo_all_blocks=1 00:06:49.195 --rc geninfo_unexecuted_blocks=1 00:06:49.195 00:06:49.195 ' 00:06:49.195 05:21:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.195 --rc genhtml_branch_coverage=1 00:06:49.195 --rc genhtml_function_coverage=1 00:06:49.195 --rc genhtml_legend=1 00:06:49.195 --rc geninfo_all_blocks=1 00:06:49.195 --rc geninfo_unexecuted_blocks=1 00:06:49.195 00:06:49.195 ' 00:06:49.195 05:21:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.195 --rc genhtml_branch_coverage=1 00:06:49.195 --rc genhtml_function_coverage=1 00:06:49.195 --rc genhtml_legend=1 00:06:49.195 --rc geninfo_all_blocks=1 00:06:49.195 --rc geninfo_unexecuted_blocks=1 00:06:49.195 00:06:49.195 ' 00:06:49.195 05:21:52 -- app/version.sh@17 -- # get_header_version major 00:06:49.195 05:21:52 -- app/version.sh@14 -- # cut -f2 00:06:49.195 05:21:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.195 05:21:52 -- app/version.sh@14 -- # tr -d '"' 00:06:49.195 05:21:52 -- app/version.sh@17 -- # major=24 00:06:49.195 05:21:52 -- app/version.sh@18 -- # get_header_version minor 00:06:49.195 05:21:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.195 05:21:52 -- app/version.sh@14 -- # cut -f2 00:06:49.195 05:21:52 -- app/version.sh@14 -- # tr -d '"' 00:06:49.195 05:21:52 -- app/version.sh@18 -- # minor=1 00:06:49.195 05:21:52 -- app/version.sh@19 -- # get_header_version patch 00:06:49.195 05:21:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.195 05:21:52 -- app/version.sh@14 -- # cut -f2 00:06:49.195 05:21:52 -- app/version.sh@14 -- # tr -d '"' 00:06:49.195 05:21:52 -- app/version.sh@19 -- # patch=1 00:06:49.195 05:21:52 -- app/version.sh@20 -- # get_header_version suffix 00:06:49.195 05:21:52 -- app/version.sh@14 -- # cut -f2 00:06:49.195 05:21:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.195 05:21:52 -- app/version.sh@14 -- # tr -d '"' 00:06:49.195 05:21:52 -- app/version.sh@20 -- # suffix=-pre 00:06:49.195 05:21:52 -- app/version.sh@22 -- # version=24.1 00:06:49.195 05:21:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.195 05:21:52 -- app/version.sh@25 -- # version=24.1.1 00:06:49.195 05:21:52 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:49.195 05:21:52 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:49.195 05:21:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.195 05:21:52 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:49.195 05:21:52 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:49.195 00:06:49.195 real 0m0.255s 00:06:49.195 user 0m0.155s 00:06:49.195 sys 0m0.137s 00:06:49.195 05:21:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.195 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.195 ************************************ 00:06:49.195 END TEST version 00:06:49.195 ************************************ 00:06:49.456 05:21:52 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@191 -- # uname -s 00:06:49.456 05:21:52 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:06:49.456 05:21:52 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:49.456 05:21:52 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:49.456 05:21:52 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@255 -- # timing_exit lib 00:06:49.456 05:21:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.456 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.456 05:21:52 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:06:49.456 05:21:52 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:06:49.456 05:21:52 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.456 05:21:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.456 05:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.456 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.456 ************************************ 00:06:49.456 START TEST nvmf_tcp 00:06:49.456 ************************************ 00:06:49.456 05:21:52 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.456 * Looking for test storage... 00:06:49.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:49.456 05:21:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.456 05:21:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.456 05:21:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.456 05:21:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.456 05:21:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.457 05:21:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.457 05:21:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.457 05:21:52 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.457 05:21:52 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.457 05:21:52 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.457 05:21:52 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.457 05:21:52 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.457 05:21:52 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.457 05:21:52 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.457 05:21:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.457 05:21:52 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.457 05:21:52 -- scripts/common.sh@344 -- # : 1 00:06:49.457 05:21:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.457 05:21:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.457 05:21:52 -- scripts/common.sh@364 -- # decimal 1 00:06:49.457 05:21:52 -- scripts/common.sh@352 -- # local d=1 00:06:49.457 05:21:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.457 05:21:52 -- scripts/common.sh@354 -- # echo 1 00:06:49.457 05:21:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.457 05:21:52 -- scripts/common.sh@365 -- # decimal 2 00:06:49.457 05:21:52 -- scripts/common.sh@352 -- # local d=2 00:06:49.457 05:21:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.457 05:21:52 -- scripts/common.sh@354 -- # echo 2 00:06:49.457 05:21:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.457 05:21:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.457 05:21:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.457 05:21:52 -- scripts/common.sh@367 -- # return 0 00:06:49.457 05:21:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.457 05:21:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.457 --rc genhtml_branch_coverage=1 00:06:49.457 --rc genhtml_function_coverage=1 00:06:49.457 --rc genhtml_legend=1 00:06:49.457 --rc geninfo_all_blocks=1 00:06:49.457 --rc geninfo_unexecuted_blocks=1 00:06:49.457 00:06:49.457 ' 00:06:49.457 05:21:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.457 --rc genhtml_branch_coverage=1 00:06:49.457 --rc genhtml_function_coverage=1 00:06:49.457 --rc genhtml_legend=1 00:06:49.457 --rc geninfo_all_blocks=1 00:06:49.457 --rc geninfo_unexecuted_blocks=1 00:06:49.457 00:06:49.457 ' 00:06:49.457 05:21:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.457 --rc genhtml_branch_coverage=1 00:06:49.457 --rc genhtml_function_coverage=1 00:06:49.457 --rc 
genhtml_legend=1 00:06:49.457 --rc geninfo_all_blocks=1 00:06:49.457 --rc geninfo_unexecuted_blocks=1 00:06:49.457 00:06:49.457 ' 00:06:49.457 05:21:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.457 --rc genhtml_branch_coverage=1 00:06:49.457 --rc genhtml_function_coverage=1 00:06:49.457 --rc genhtml_legend=1 00:06:49.457 --rc geninfo_all_blocks=1 00:06:49.457 --rc geninfo_unexecuted_blocks=1 00:06:49.457 00:06:49.457 ' 00:06:49.457 05:21:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:49.457 05:21:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:49.457 05:21:52 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.720 05:21:52 -- nvmf/common.sh@7 -- # uname -s 00:06:49.720 05:21:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.720 05:21:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.720 05:21:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.720 05:21:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.720 05:21:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.720 05:21:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.720 05:21:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.720 05:21:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.720 05:21:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.720 05:21:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.720 05:21:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:49.720 05:21:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:49.720 05:21:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.720 05:21:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.720 05:21:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.720 05:21:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.720 05:21:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.720 05:21:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.720 05:21:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.720 05:21:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.720 05:21:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.720 05:21:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.720 05:21:52 -- paths/export.sh@5 -- # export PATH 00:06:49.720 05:21:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.720 05:21:52 -- nvmf/common.sh@46 -- # : 0 00:06:49.720 05:21:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:49.720 05:21:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:49.720 05:21:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:49.720 05:21:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.720 05:21:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.720 05:21:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:49.720 05:21:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:49.720 05:21:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:49.720 05:21:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:49.720 05:21:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:49.720 05:21:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:49.720 05:21:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.720 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.720 05:21:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:49.720 05:21:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.720 05:21:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.720 05:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.720 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.720 ************************************ 00:06:49.720 START TEST nvmf_example 00:06:49.720 ************************************ 00:06:49.720 05:21:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.720 * Looking for test storage... 
00:06:49.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.720 05:21:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.720 05:21:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.720 05:21:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.720 05:21:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.720 05:21:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.720 05:21:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.720 05:21:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.720 05:21:52 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.720 05:21:52 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.720 05:21:52 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.720 05:21:52 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.720 05:21:52 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.720 05:21:52 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.720 05:21:52 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.720 05:21:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.720 05:21:52 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.720 05:21:52 -- scripts/common.sh@344 -- # : 1 00:06:49.720 05:21:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.720 05:21:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.720 05:21:52 -- scripts/common.sh@364 -- # decimal 1 00:06:49.720 05:21:52 -- scripts/common.sh@352 -- # local d=1 00:06:49.720 05:21:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.721 05:21:52 -- scripts/common.sh@354 -- # echo 1 00:06:49.721 05:21:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.721 05:21:52 -- scripts/common.sh@365 -- # decimal 2 00:06:49.721 05:21:52 -- scripts/common.sh@352 -- # local d=2 00:06:49.721 05:21:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.721 05:21:52 -- scripts/common.sh@354 -- # echo 2 00:06:49.721 05:21:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.721 05:21:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.721 05:21:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.721 05:21:52 -- scripts/common.sh@367 -- # return 0 00:06:49.721 05:21:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.721 05:21:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.721 --rc genhtml_branch_coverage=1 00:06:49.721 --rc genhtml_function_coverage=1 00:06:49.721 --rc genhtml_legend=1 00:06:49.721 --rc geninfo_all_blocks=1 00:06:49.721 --rc geninfo_unexecuted_blocks=1 00:06:49.721 00:06:49.721 ' 00:06:49.721 05:21:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.721 --rc genhtml_branch_coverage=1 00:06:49.721 --rc genhtml_function_coverage=1 00:06:49.721 --rc genhtml_legend=1 00:06:49.721 --rc geninfo_all_blocks=1 00:06:49.721 --rc geninfo_unexecuted_blocks=1 00:06:49.721 00:06:49.721 ' 00:06:49.721 05:21:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.721 --rc genhtml_branch_coverage=1 00:06:49.721 --rc genhtml_function_coverage=1 00:06:49.721 --rc genhtml_legend=1 00:06:49.721 --rc geninfo_all_blocks=1 00:06:49.721 --rc geninfo_unexecuted_blocks=1 00:06:49.721 00:06:49.721 
' 00:06:49.721 05:21:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.721 --rc genhtml_branch_coverage=1 00:06:49.721 --rc genhtml_function_coverage=1 00:06:49.721 --rc genhtml_legend=1 00:06:49.721 --rc geninfo_all_blocks=1 00:06:49.721 --rc geninfo_unexecuted_blocks=1 00:06:49.721 00:06:49.721 ' 00:06:49.721 05:21:52 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.721 05:21:52 -- nvmf/common.sh@7 -- # uname -s 00:06:49.721 05:21:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.721 05:21:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.721 05:21:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.721 05:21:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.721 05:21:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.721 05:21:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.721 05:21:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.721 05:21:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.721 05:21:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.721 05:21:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.721 05:21:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:49.721 05:21:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:49.721 05:21:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.721 05:21:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.721 05:21:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.721 05:21:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.983 05:21:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.983 05:21:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.983 05:21:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.983 05:21:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.983 05:21:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.983 05:21:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.983 05:21:52 -- paths/export.sh@5 -- # export PATH 00:06:49.983 05:21:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.983 05:21:52 -- nvmf/common.sh@46 -- # : 0 00:06:49.984 05:21:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:49.984 05:21:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:49.984 05:21:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:49.984 05:21:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.984 05:21:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.984 05:21:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:49.984 05:21:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:49.984 05:21:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:49.984 05:21:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:49.984 05:21:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:49.984 05:21:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:49.984 05:21:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:49.984 05:21:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:49.984 05:21:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:49.984 05:21:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:49.984 05:21:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:49.984 05:21:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.984 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.984 05:21:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:49.984 05:21:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:49.984 05:21:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.984 05:21:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:49.984 05:21:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:49.984 05:21:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:49.984 05:21:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.984 05:21:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.984 05:21:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.984 05:21:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:49.984 05:21:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:49.984 05:21:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:49.984 05:21:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.223 05:22:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:58.223 05:22:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:58.223 05:22:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:58.223 05:22:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:58.223 05:22:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:58.223 05:22:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:58.223 05:22:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:58.223 05:22:00 -- nvmf/common.sh@294 -- # net_devs=() 00:06:58.223 05:22:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:58.223 05:22:00 -- nvmf/common.sh@295 -- # e810=() 00:06:58.223 05:22:00 -- nvmf/common.sh@295 -- # local -ga e810 00:06:58.223 05:22:00 -- nvmf/common.sh@296 -- # x722=() 00:06:58.223 05:22:00 -- nvmf/common.sh@296 -- # local -ga x722 00:06:58.223 05:22:00 -- nvmf/common.sh@297 -- # mlx=() 00:06:58.223 05:22:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:58.223 05:22:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.223 05:22:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:58.223 05:22:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:58.223 05:22:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.223 05:22:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:58.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:58.223 05:22:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.223 05:22:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:58.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:58.223 05:22:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
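The trace above is nvmf/common.sh classifying the host's NICs by PCI vendor:device ID (0x8086:0x159b matches the two E810 ports found in this run) and then resolving each matched function to its kernel net interface through sysfs. A minimal stand-alone sketch of that sysfs lookup, reusing the 0000:31:00.0 address reported in the log (illustrative only, not the test script itself):

    # Show the vendor/device pair and the netdev(s) backing one PCI function,
    # the same information the trace derives for each matched E810 port.
    pci=0000:31:00.0
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
    ls /sys/bus/pci/devices/$pci/net/        # -> cvl_0_0 in this run
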
00:06:58.223 05:22:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.223 05:22:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.223 05:22:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.223 05:22:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:58.223 Found net devices under 0000:31:00.0: cvl_0_0 00:06:58.223 05:22:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.223 05:22:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.223 05:22:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.223 05:22:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.223 05:22:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:58.223 Found net devices under 0000:31:00.1: cvl_0_1 00:06:58.223 05:22:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.223 05:22:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:58.223 05:22:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:58.223 05:22:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:58.223 05:22:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.223 05:22:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.223 05:22:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.223 05:22:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:58.223 05:22:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.224 05:22:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.224 05:22:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:58.224 05:22:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.224 05:22:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.224 05:22:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:58.224 05:22:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:58.224 05:22:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.224 05:22:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.224 05:22:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.224 05:22:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.224 05:22:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:58.224 05:22:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.224 05:22:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.224 05:22:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.224 05:22:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:58.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:58.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:06:58.224 00:06:58.224 --- 10.0.0.2 ping statistics --- 00:06:58.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.224 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:06:58.224 05:22:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:06:58.224 00:06:58.224 --- 10.0.0.1 ping statistics --- 00:06:58.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.224 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:06:58.224 05:22:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.224 05:22:00 -- nvmf/common.sh@410 -- # return 0 00:06:58.224 05:22:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:58.224 05:22:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.224 05:22:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:58.224 05:22:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:58.224 05:22:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.224 05:22:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:58.224 05:22:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:58.224 05:22:00 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:58.224 05:22:00 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:58.224 05:22:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:58.224 05:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 05:22:00 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:58.224 05:22:00 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:58.224 05:22:00 -- target/nvmf_example.sh@34 -- # nvmfpid=1631076 00:06:58.224 05:22:00 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.224 05:22:00 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:58.224 05:22:00 -- target/nvmf_example.sh@36 -- # waitforlisten 1631076 00:06:58.224 05:22:00 -- common/autotest_common.sh@829 -- # '[' -z 1631076 ']' 00:06:58.224 05:22:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.224 05:22:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.224 05:22:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
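Before the example target was launched, the script carved the two E810 ports into a point-to-point TCP test topology: the target port cvl_0_0 is moved into a private network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic (port 4420) arriving via cvl_0_1, and a ping in each direction confirms the link. A condensed, hand-runnable restatement of those commands from the trace (run as root; interface names as reported above):

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf example application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF), which is the process the script is now waiting for on /var/tmp/spdk.sock.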
00:06:58.224 05:22:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.224 05:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.224 05:22:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.224 05:22:01 -- common/autotest_common.sh@862 -- # return 0 00:06:58.224 05:22:01 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:58.224 05:22:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.224 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 05:22:01 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.224 05:22:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.224 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 05:22:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.224 05:22:01 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:58.224 05:22:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.224 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 05:22:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.224 05:22:01 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:58.224 05:22:01 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.224 05:22:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.224 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 05:22:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.484 05:22:01 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:58.484 05:22:01 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:58.484 05:22:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.484 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 05:22:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.484 05:22:01 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.484 05:22:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.484 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 05:22:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.484 05:22:01 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:58.484 05:22:01 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:58.484 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.480 Initializing NVMe Controllers 00:07:08.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.480 Initialization complete. Launching workers. 
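The controller the benchmark just attached to was assembled a few lines earlier entirely over JSON-RPC: a TCP transport with the harness's transport options, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and a TCP listener on 10.0.0.2:4420. Outside the harness the same target could be provisioned with scripts/rpc.py roughly as follows (values copied from the trace; assumes the example target is already listening on /var/tmp/spdk.sock), after which spdk_nvme_perf drives the queue-depth-64, 4 KiB random read/write workload whose latency table follows:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # returns Malloc0 in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
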
00:07:08.480 ======================================================== 00:07:08.480 Latency(us) 00:07:08.480 Device Information : IOPS MiB/s Average min max 00:07:08.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19351.60 75.59 3306.70 608.55 20056.65 00:07:08.480 ======================================================== 00:07:08.480 Total : 19351.60 75.59 3306.70 608.55 20056.65 00:07:08.480 00:07:08.480 05:22:11 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:08.480 05:22:11 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:08.480 05:22:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:08.480 05:22:11 -- nvmf/common.sh@116 -- # sync 00:07:08.480 05:22:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:08.480 05:22:11 -- nvmf/common.sh@119 -- # set +e 00:07:08.480 05:22:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:08.480 05:22:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:08.480 rmmod nvme_tcp 00:07:08.740 rmmod nvme_fabrics 00:07:08.740 rmmod nvme_keyring 00:07:08.740 05:22:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:08.740 05:22:11 -- nvmf/common.sh@123 -- # set -e 00:07:08.741 05:22:11 -- nvmf/common.sh@124 -- # return 0 00:07:08.741 05:22:11 -- nvmf/common.sh@477 -- # '[' -n 1631076 ']' 00:07:08.741 05:22:11 -- nvmf/common.sh@478 -- # killprocess 1631076 00:07:08.741 05:22:11 -- common/autotest_common.sh@936 -- # '[' -z 1631076 ']' 00:07:08.741 05:22:11 -- common/autotest_common.sh@940 -- # kill -0 1631076 00:07:08.741 05:22:11 -- common/autotest_common.sh@941 -- # uname 00:07:08.741 05:22:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.741 05:22:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1631076 00:07:08.741 05:22:11 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:08.741 05:22:11 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:08.741 05:22:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1631076' 00:07:08.741 killing process with pid 1631076 00:07:08.741 05:22:11 -- common/autotest_common.sh@955 -- # kill 1631076 00:07:08.741 05:22:11 -- common/autotest_common.sh@960 -- # wait 1631076 00:07:08.741 nvmf threads initialize successfully 00:07:08.741 bdev subsystem init successfully 00:07:08.741 created a nvmf target service 00:07:08.741 create targets's poll groups done 00:07:08.741 all subsystems of target started 00:07:08.741 nvmf target is running 00:07:08.741 all subsystems of target stopped 00:07:08.741 destroy targets's poll groups done 00:07:08.741 destroyed the nvmf target service 00:07:08.741 bdev subsystem finish successfully 00:07:08.741 nvmf threads destroy successfully 00:07:08.741 05:22:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:08.741 05:22:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:08.741 05:22:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:08.741 05:22:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.741 05:22:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:08.741 05:22:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.741 05:22:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.741 05:22:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.292 05:22:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:11.292 05:22:14 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:11.292 05:22:14 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.292 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:11.292 00:07:11.292 real 0m21.319s 00:07:11.292 user 0m46.453s 00:07:11.293 sys 0m6.841s 00:07:11.293 05:22:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.293 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:11.293 ************************************ 00:07:11.293 END TEST nvmf_example 00:07:11.293 ************************************ 00:07:11.293 05:22:14 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.293 05:22:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.293 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:11.293 ************************************ 00:07:11.293 START TEST nvmf_filesystem 00:07:11.293 ************************************ 00:07:11.293 05:22:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.293 * Looking for test storage... 00:07:11.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.293 05:22:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:11.293 05:22:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:11.293 05:22:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:11.293 05:22:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:11.293 05:22:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:11.293 05:22:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:11.293 05:22:14 -- scripts/common.sh@335 -- # IFS=.-: 00:07:11.293 05:22:14 -- scripts/common.sh@335 -- # read -ra ver1 00:07:11.293 05:22:14 -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.293 05:22:14 -- scripts/common.sh@336 -- # read -ra ver2 00:07:11.293 05:22:14 -- scripts/common.sh@337 -- # local 'op=<' 00:07:11.293 05:22:14 -- scripts/common.sh@339 -- # ver1_l=2 00:07:11.293 05:22:14 -- scripts/common.sh@340 -- # ver2_l=1 00:07:11.293 05:22:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:11.293 05:22:14 -- scripts/common.sh@343 -- # case "$op" in 00:07:11.293 05:22:14 -- scripts/common.sh@344 -- # : 1 00:07:11.293 05:22:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:11.293 05:22:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.293 05:22:14 -- scripts/common.sh@364 -- # decimal 1 00:07:11.293 05:22:14 -- scripts/common.sh@352 -- # local d=1 00:07:11.293 05:22:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.293 05:22:14 -- scripts/common.sh@354 -- # echo 1 00:07:11.293 05:22:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:11.293 05:22:14 -- scripts/common.sh@365 -- # decimal 2 00:07:11.293 05:22:14 -- scripts/common.sh@352 -- # local d=2 00:07:11.293 05:22:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.293 05:22:14 -- scripts/common.sh@354 -- # echo 2 00:07:11.293 05:22:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:11.293 05:22:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:11.293 05:22:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:11.293 05:22:14 -- scripts/common.sh@367 -- # return 0 00:07:11.293 05:22:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:11.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.293 --rc genhtml_branch_coverage=1 00:07:11.293 --rc genhtml_function_coverage=1 00:07:11.293 --rc genhtml_legend=1 00:07:11.293 --rc geninfo_all_blocks=1 00:07:11.293 --rc geninfo_unexecuted_blocks=1 00:07:11.293 00:07:11.293 ' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:11.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.293 --rc genhtml_branch_coverage=1 00:07:11.293 --rc genhtml_function_coverage=1 00:07:11.293 --rc genhtml_legend=1 00:07:11.293 --rc geninfo_all_blocks=1 00:07:11.293 --rc geninfo_unexecuted_blocks=1 00:07:11.293 00:07:11.293 ' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:11.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.293 --rc genhtml_branch_coverage=1 00:07:11.293 --rc genhtml_function_coverage=1 00:07:11.293 --rc genhtml_legend=1 00:07:11.293 --rc geninfo_all_blocks=1 00:07:11.293 --rc geninfo_unexecuted_blocks=1 00:07:11.293 00:07:11.293 ' 00:07:11.293 05:22:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:11.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.293 --rc genhtml_branch_coverage=1 00:07:11.293 --rc genhtml_function_coverage=1 00:07:11.293 --rc genhtml_legend=1 00:07:11.293 --rc geninfo_all_blocks=1 00:07:11.293 --rc geninfo_unexecuted_blocks=1 00:07:11.293 00:07:11.293 ' 00:07:11.293 05:22:14 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.293 05:22:14 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.293 05:22:14 -- common/autotest_common.sh@34 -- # set -e 00:07:11.293 05:22:14 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.293 05:22:14 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.293 05:22:14 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.293 05:22:14 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:11.293 05:22:14 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.293 05:22:14 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.293 05:22:14 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.293 05:22:14 -- common/build_config.sh@4 -- # 
CONFIG_HAVE_EXECINFO_H=y 00:07:11.293 05:22:14 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.293 05:22:14 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.293 05:22:14 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.293 05:22:14 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.293 05:22:14 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.293 05:22:14 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.293 05:22:14 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.293 05:22:14 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.293 05:22:14 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.293 05:22:14 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.293 05:22:14 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.293 05:22:14 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.293 05:22:14 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.293 05:22:14 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.293 05:22:14 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.293 05:22:14 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.293 05:22:14 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.293 05:22:14 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.293 05:22:14 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.293 05:22:14 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.293 05:22:14 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.293 05:22:14 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.293 05:22:14 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.293 05:22:14 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.293 05:22:14 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.294 05:22:14 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.294 05:22:14 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.294 05:22:14 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.294 05:22:14 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.294 05:22:14 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:11.294 05:22:14 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:11.294 05:22:14 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.294 05:22:14 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.294 05:22:14 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.294 05:22:14 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.294 05:22:14 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.294 05:22:14 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:11.294 05:22:14 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.294 05:22:14 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.294 05:22:14 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.294 05:22:14 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.294 05:22:14 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:11.294 05:22:14 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:11.294 05:22:14 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.294 05:22:14 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:11.294 05:22:14 -- 
common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:11.294 05:22:14 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:11.294 05:22:14 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:11.294 05:22:14 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:11.294 05:22:14 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:11.294 05:22:14 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:11.294 05:22:14 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:11.294 05:22:14 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.294 05:22:14 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:11.294 05:22:14 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:11.294 05:22:14 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:11.294 05:22:14 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:11.294 05:22:14 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:11.294 05:22:14 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:11.294 05:22:14 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:11.294 05:22:14 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:11.294 05:22:14 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.294 05:22:14 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:11.294 05:22:14 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:11.294 05:22:14 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:11.294 05:22:14 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:11.294 05:22:14 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:11.294 05:22:14 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:11.294 05:22:14 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.294 05:22:14 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:11.294 05:22:14 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:11.294 05:22:14 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:11.294 05:22:14 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.294 05:22:14 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:11.294 05:22:14 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:11.294 05:22:14 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.294 05:22:14 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.294 05:22:14 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.294 05:22:14 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.294 05:22:14 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.294 05:22:14 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.294 05:22:14 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.294 05:22:14 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.294 05:22:14 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.294 05:22:14 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.294 05:22:14 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.294 05:22:14 -- 
common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.294 05:22:14 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.294 05:22:14 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.294 05:22:14 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.294 05:22:14 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.294 #define SPDK_CONFIG_H 00:07:11.294 #define SPDK_CONFIG_APPS 1 00:07:11.294 #define SPDK_CONFIG_ARCH native 00:07:11.294 #undef SPDK_CONFIG_ASAN 00:07:11.294 #undef SPDK_CONFIG_AVAHI 00:07:11.294 #undef SPDK_CONFIG_CET 00:07:11.294 #define SPDK_CONFIG_COVERAGE 1 00:07:11.294 #define SPDK_CONFIG_CROSS_PREFIX 00:07:11.294 #undef SPDK_CONFIG_CRYPTO 00:07:11.294 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.294 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.294 #undef SPDK_CONFIG_DAOS 00:07:11.294 #define SPDK_CONFIG_DAOS_DIR 00:07:11.294 #define SPDK_CONFIG_DEBUG 1 00:07:11.294 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.294 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.294 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:11.294 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:11.294 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.294 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.294 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.294 #undef SPDK_CONFIG_FC 00:07:11.294 #define SPDK_CONFIG_FC_PATH 00:07:11.294 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.294 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.294 #undef SPDK_CONFIG_FUSE 00:07:11.294 #undef SPDK_CONFIG_FUZZER 00:07:11.294 #define SPDK_CONFIG_FUZZER_LIB 00:07:11.294 #undef SPDK_CONFIG_GOLANG 00:07:11.294 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.294 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.294 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.294 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.294 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.294 #define SPDK_CONFIG_IDXD 1 00:07:11.294 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:11.294 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.294 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.294 #define SPDK_CONFIG_ISAL 1 00:07:11.294 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.294 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.294 #define SPDK_CONFIG_LIBDIR 00:07:11.294 #undef SPDK_CONFIG_LTO 00:07:11.294 #define SPDK_CONFIG_MAX_LCORES 00:07:11.294 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.294 #undef SPDK_CONFIG_OCF 00:07:11.294 #define SPDK_CONFIG_OCF_PATH 00:07:11.294 #define SPDK_CONFIG_OPENSSL_PATH 00:07:11.294 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.294 #undef SPDK_CONFIG_PGO_USE 00:07:11.294 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.294 #undef SPDK_CONFIG_RAID5F 00:07:11.294 #undef SPDK_CONFIG_RBD 00:07:11.294 #define SPDK_CONFIG_RDMA 1 00:07:11.295 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.295 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.295 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.295 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.295 #define SPDK_CONFIG_SHARED 1 00:07:11.295 #undef SPDK_CONFIG_SMA 00:07:11.295 #define SPDK_CONFIG_TESTS 1 00:07:11.295 #undef SPDK_CONFIG_TSAN 00:07:11.295 #define SPDK_CONFIG_UBLK 1 00:07:11.295 #define SPDK_CONFIG_UBSAN 1 00:07:11.295 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.295 #undef SPDK_CONFIG_URING 00:07:11.295 #define SPDK_CONFIG_URING_PATH 00:07:11.295 #undef SPDK_CONFIG_URING_ZNS 00:07:11.295 #undef SPDK_CONFIG_USDT 
00:07:11.295 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.295 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.295 #undef SPDK_CONFIG_VFIO_USER 00:07:11.295 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.295 #define SPDK_CONFIG_VHOST 1 00:07:11.295 #define SPDK_CONFIG_VIRTIO 1 00:07:11.295 #undef SPDK_CONFIG_VTUNE 00:07:11.295 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.295 #define SPDK_CONFIG_WERROR 1 00:07:11.295 #define SPDK_CONFIG_WPDK_DIR 00:07:11.295 #undef SPDK_CONFIG_XNVME 00:07:11.295 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.295 05:22:14 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.295 05:22:14 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.295 05:22:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.295 05:22:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.295 05:22:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.295 05:22:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.295 05:22:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.295 05:22:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.295 05:22:14 -- paths/export.sh@5 -- # export PATH 00:07:11.295 05:22:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.295 05:22:14 -- common/autotest_common.sh@50 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.295 05:22:14 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.295 05:22:14 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.295 05:22:14 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.295 05:22:14 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.295 05:22:14 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.295 05:22:14 -- pm/common@16 -- # TEST_TAG=N/A 00:07:11.295 05:22:14 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:11.295 05:22:14 -- common/autotest_common.sh@52 -- # : 1 00:07:11.295 05:22:14 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:11.295 05:22:14 -- common/autotest_common.sh@56 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.295 05:22:14 -- common/autotest_common.sh@58 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:11.295 05:22:14 -- common/autotest_common.sh@60 -- # : 1 00:07:11.295 05:22:14 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.295 05:22:14 -- common/autotest_common.sh@62 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:11.295 05:22:14 -- common/autotest_common.sh@64 -- # : 00:07:11.295 05:22:14 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:11.295 05:22:14 -- common/autotest_common.sh@66 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.295 05:22:14 -- common/autotest_common.sh@68 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:11.295 05:22:14 -- common/autotest_common.sh@70 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:11.295 05:22:14 -- common/autotest_common.sh@72 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.295 05:22:14 -- common/autotest_common.sh@74 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:11.295 05:22:14 -- common/autotest_common.sh@76 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:11.295 05:22:14 -- common/autotest_common.sh@78 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:11.295 05:22:14 -- common/autotest_common.sh@80 -- # : 1 00:07:11.295 05:22:14 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:11.295 05:22:14 -- common/autotest_common.sh@82 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:11.295 05:22:14 -- common/autotest_common.sh@84 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:11.295 05:22:14 -- common/autotest_common.sh@86 -- # : 1 00:07:11.295 05:22:14 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:11.295 05:22:14 -- common/autotest_common.sh@88 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:11.295 05:22:14 -- 
common/autotest_common.sh@90 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.295 05:22:14 -- common/autotest_common.sh@92 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:11.295 05:22:14 -- common/autotest_common.sh@94 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.295 05:22:14 -- common/autotest_common.sh@96 -- # : tcp 00:07:11.295 05:22:14 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.295 05:22:14 -- common/autotest_common.sh@98 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:11.295 05:22:14 -- common/autotest_common.sh@100 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:11.295 05:22:14 -- common/autotest_common.sh@102 -- # : 0 00:07:11.295 05:22:14 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:11.295 05:22:14 -- common/autotest_common.sh@104 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:11.296 05:22:14 -- common/autotest_common.sh@106 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:11.296 05:22:14 -- common/autotest_common.sh@108 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:11.296 05:22:14 -- common/autotest_common.sh@110 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:11.296 05:22:14 -- common/autotest_common.sh@112 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.296 05:22:14 -- common/autotest_common.sh@114 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:11.296 05:22:14 -- common/autotest_common.sh@116 -- # : 1 00:07:11.296 05:22:14 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:11.296 05:22:14 -- common/autotest_common.sh@118 -- # : 00:07:11.296 05:22:14 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.296 05:22:14 -- common/autotest_common.sh@120 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:11.296 05:22:14 -- common/autotest_common.sh@122 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:11.296 05:22:14 -- common/autotest_common.sh@124 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:11.296 05:22:14 -- common/autotest_common.sh@126 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:11.296 05:22:14 -- common/autotest_common.sh@128 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:11.296 05:22:14 -- common/autotest_common.sh@130 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:11.296 05:22:14 -- common/autotest_common.sh@132 -- # : 00:07:11.296 05:22:14 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.296 05:22:14 -- common/autotest_common.sh@134 -- # : true 00:07:11.296 05:22:14 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:11.296 05:22:14 -- common/autotest_common.sh@136 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:11.296 
05:22:14 -- common/autotest_common.sh@138 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:11.296 05:22:14 -- common/autotest_common.sh@140 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:11.296 05:22:14 -- common/autotest_common.sh@142 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.296 05:22:14 -- common/autotest_common.sh@144 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:11.296 05:22:14 -- common/autotest_common.sh@146 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:11.296 05:22:14 -- common/autotest_common.sh@148 -- # : e810 00:07:11.296 05:22:14 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:11.296 05:22:14 -- common/autotest_common.sh@150 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:11.296 05:22:14 -- common/autotest_common.sh@152 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:11.296 05:22:14 -- common/autotest_common.sh@154 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:11.296 05:22:14 -- common/autotest_common.sh@156 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.296 05:22:14 -- common/autotest_common.sh@158 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.296 05:22:14 -- common/autotest_common.sh@160 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:11.296 05:22:14 -- common/autotest_common.sh@163 -- # : 00:07:11.296 05:22:14 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.296 05:22:14 -- common/autotest_common.sh@165 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.296 05:22:14 -- common/autotest_common.sh@167 -- # : 0 00:07:11.296 05:22:14 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.296 05:22:14 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.296 05:22:14 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.296 05:22:14 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.296 05:22:14 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.296 05:22:14 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.296 05:22:14 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.296 05:22:14 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:11.296 05:22:14 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.296 05:22:14 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.296 05:22:14 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.296 05:22:14 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.296 05:22:14 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.297 05:22:14 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:11.297 05:22:14 -- common/autotest_common.sh@196 -- # cat 00:07:11.297 05:22:14 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:11.297 05:22:14 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.297 05:22:14 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.297 05:22:14 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.297 05:22:14 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.297 05:22:14 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.297 05:22:14 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:11.297 05:22:14 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.297 05:22:14 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.297 05:22:14 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.297 05:22:14 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.297 05:22:14 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.297 05:22:14 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.297 05:22:14 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.297 05:22:14 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.297 05:22:14 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.297 05:22:14 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.297 05:22:14 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.297 05:22:14 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.297 05:22:14 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:11.297 05:22:14 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:11.297 05:22:14 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:11.297 05:22:14 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:11.297 05:22:14 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:11.297 05:22:14 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:11.297 05:22:14 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 
']' 00:07:11.297 05:22:14 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:11.297 05:22:14 -- common/autotest_common.sh@259 -- # valgrind= 00:07:11.297 05:22:14 -- common/autotest_common.sh@265 -- # uname -s 00:07:11.297 05:22:14 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:11.297 05:22:14 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:11.297 05:22:14 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:11.297 05:22:14 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:11.297 05:22:14 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:11.297 05:22:14 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j144 00:07:11.297 05:22:14 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:11.297 05:22:14 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:11.297 05:22:14 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:11.297 05:22:14 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:11.297 05:22:14 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:11.297 05:22:14 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:11.297 05:22:14 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:11.297 05:22:14 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:11.297 05:22:14 -- common/autotest_common.sh@319 -- # [[ -z 1633896 ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@319 -- # kill -0 1633896 00:07:11.297 05:22:14 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:11.297 05:22:14 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:11.297 05:22:14 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:11.297 05:22:14 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:11.297 05:22:14 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:11.297 05:22:14 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:11.297 05:22:14 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:11.297 05:22:14 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.yKV2kI 00:07:11.297 05:22:14 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.297 05:22:14 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:11.297 05:22:14 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yKV2kI/tests/target /tmp/spdk.yKV2kI 00:07:11.297 05:22:14 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:11.297 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.297 05:22:14 -- common/autotest_common.sh@328 -- # df -T 00:07:11.297 05:22:14 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:11.297 05:22:14 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:11.297 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:11.297 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:11.297 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:11.297 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=123561607168 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=129356558336 00:07:11.297 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=5794951168 00:07:11.297 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=64677019648 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=64678277120 00:07:11.297 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:11.297 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=25861545984 00:07:11.297 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=25871314944 00:07:11.298 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=9768960 00:07:11.298 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=efivarfs 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=efivarfs 00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=175104 00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=507904 00:07:11.298 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=328704 00:07:11.298 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=64677806080 00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=64678281216 00:07:11.298 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=475136 00:07:11.298 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:11.298 05:22:14 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 
00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # avails["$mount"]=12935643136 00:07:11.298 05:22:14 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12935655424 00:07:11.298 05:22:14 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:11.298 05:22:14 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:11.298 05:22:14 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:11.298 * Looking for test storage... 00:07:11.298 05:22:14 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:11.298 05:22:14 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.298 05:22:14 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.298 05:22:14 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:11.298 05:22:14 -- common/autotest_common.sh@373 -- # mount=/ 00:07:11.298 05:22:14 -- common/autotest_common.sh@375 -- # target_space=123561607168 00:07:11.298 05:22:14 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:11.298 05:22:14 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:11.298 05:22:14 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@382 -- # new_size=8009543680 00:07:11.298 05:22:14 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:11.298 05:22:14 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.298 05:22:14 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.298 05:22:14 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.298 05:22:14 -- common/autotest_common.sh@390 -- # return 0 00:07:11.298 05:22:14 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:11.298 05:22:14 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:11.298 05:22:14 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:11.298 05:22:14 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1682 -- # true 00:07:11.298 05:22:14 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:11.298 05:22:14 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@27 -- # exec 00:07:11.298 05:22:14 -- common/autotest_common.sh@29 -- # exec 00:07:11.298 05:22:14 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:11.298 05:22:14 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:11.298 05:22:14 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:11.298 05:22:14 -- common/autotest_common.sh@18 -- # set -x 00:07:11.298 05:22:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:11.298 05:22:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:11.298 05:22:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:11.298 05:22:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:11.298 05:22:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:11.298 05:22:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:11.298 05:22:14 -- scripts/common.sh@335 -- # IFS=.-: 00:07:11.298 05:22:14 -- scripts/common.sh@335 -- # read -ra ver1 00:07:11.298 05:22:14 -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.298 05:22:14 -- scripts/common.sh@336 -- # read -ra ver2 00:07:11.298 05:22:14 -- scripts/common.sh@337 -- # local 'op=<' 00:07:11.298 05:22:14 -- scripts/common.sh@339 -- # ver1_l=2 00:07:11.298 05:22:14 -- scripts/common.sh@340 -- # ver2_l=1 00:07:11.298 05:22:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:11.298 05:22:14 -- scripts/common.sh@343 -- # case "$op" in 00:07:11.298 05:22:14 -- scripts/common.sh@344 -- # : 1 00:07:11.298 05:22:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:11.298 05:22:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.298 05:22:14 -- scripts/common.sh@364 -- # decimal 1 00:07:11.298 05:22:14 -- scripts/common.sh@352 -- # local d=1 00:07:11.298 05:22:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.298 05:22:14 -- scripts/common.sh@354 -- # echo 1 00:07:11.298 05:22:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:11.298 05:22:14 -- scripts/common.sh@365 -- # decimal 2 00:07:11.298 05:22:14 -- scripts/common.sh@352 -- # local d=2 00:07:11.298 05:22:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.298 05:22:14 -- scripts/common.sh@354 -- # echo 2 00:07:11.298 05:22:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:11.298 05:22:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:11.298 05:22:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:11.298 05:22:14 -- scripts/common.sh@367 -- # return 0 00:07:11.298 05:22:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:11.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.298 --rc genhtml_branch_coverage=1 00:07:11.298 --rc genhtml_function_coverage=1 00:07:11.298 --rc genhtml_legend=1 00:07:11.298 --rc geninfo_all_blocks=1 00:07:11.298 --rc geninfo_unexecuted_blocks=1 00:07:11.298 00:07:11.298 ' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:11.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.298 --rc genhtml_branch_coverage=1 00:07:11.298 --rc genhtml_function_coverage=1 00:07:11.298 --rc genhtml_legend=1 00:07:11.298 --rc geninfo_all_blocks=1 00:07:11.298 --rc geninfo_unexecuted_blocks=1 00:07:11.298 00:07:11.298 ' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:11.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.298 --rc genhtml_branch_coverage=1 00:07:11.298 --rc genhtml_function_coverage=1 00:07:11.298 --rc genhtml_legend=1 00:07:11.298 --rc geninfo_all_blocks=1 00:07:11.298 --rc 
geninfo_unexecuted_blocks=1 00:07:11.298 00:07:11.298 ' 00:07:11.298 05:22:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:11.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.299 --rc genhtml_branch_coverage=1 00:07:11.299 --rc genhtml_function_coverage=1 00:07:11.299 --rc genhtml_legend=1 00:07:11.299 --rc geninfo_all_blocks=1 00:07:11.299 --rc geninfo_unexecuted_blocks=1 00:07:11.299 00:07:11.299 ' 00:07:11.299 05:22:14 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.299 05:22:14 -- nvmf/common.sh@7 -- # uname -s 00:07:11.299 05:22:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.299 05:22:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.299 05:22:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.299 05:22:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.299 05:22:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.299 05:22:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.299 05:22:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.299 05:22:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.299 05:22:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.299 05:22:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.299 05:22:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.299 05:22:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.299 05:22:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.299 05:22:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.299 05:22:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.299 05:22:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.562 05:22:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.562 05:22:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.562 05:22:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.562 05:22:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.562 05:22:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.562 05:22:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.562 05:22:14 -- paths/export.sh@5 -- # export PATH 00:07:11.562 05:22:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.562 05:22:14 -- nvmf/common.sh@46 -- # : 0 00:07:11.562 05:22:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:11.562 05:22:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:11.562 05:22:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:11.562 05:22:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.562 05:22:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.562 05:22:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:11.562 05:22:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:11.562 05:22:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:11.562 05:22:14 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:11.562 05:22:14 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:11.562 05:22:14 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:11.562 05:22:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:11.562 05:22:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.562 05:22:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:11.562 05:22:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:11.562 05:22:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:11.562 05:22:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.562 05:22:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.562 05:22:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.562 05:22:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:11.562 05:22:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:11.562 05:22:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:11.562 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:19.703 05:22:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:19.703 05:22:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:19.703 05:22:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:19.703 05:22:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:19.703 05:22:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:19.703 05:22:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:19.703 05:22:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:19.703 05:22:21 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:19.703 05:22:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:19.703 05:22:21 -- nvmf/common.sh@295 -- # e810=() 00:07:19.703 05:22:21 -- nvmf/common.sh@295 -- # local -ga e810 00:07:19.703 05:22:21 -- nvmf/common.sh@296 -- # x722=() 00:07:19.703 05:22:21 -- nvmf/common.sh@296 -- # local -ga x722 00:07:19.703 05:22:21 -- nvmf/common.sh@297 -- # mlx=() 00:07:19.703 05:22:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:19.703 05:22:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.703 05:22:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:19.703 05:22:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:19.703 05:22:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:19.703 05:22:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:19.703 05:22:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:19.703 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:19.703 05:22:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:19.703 05:22:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:19.703 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:19.703 05:22:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:19.703 05:22:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:19.703 05:22:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:19.703 05:22:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.703 05:22:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:19.704 05:22:21 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.704 05:22:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:19.704 Found net devices under 0000:31:00.0: cvl_0_0 00:07:19.704 05:22:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.704 05:22:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:19.704 05:22:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.704 05:22:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:19.704 05:22:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.704 05:22:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:19.704 Found net devices under 0000:31:00.1: cvl_0_1 00:07:19.704 05:22:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.704 05:22:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:19.704 05:22:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:19.704 05:22:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:19.704 05:22:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:19.704 05:22:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:19.704 05:22:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.704 05:22:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.704 05:22:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.704 05:22:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:19.704 05:22:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.704 05:22:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.704 05:22:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:19.704 05:22:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.704 05:22:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.704 05:22:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:19.704 05:22:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:19.704 05:22:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.704 05:22:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.704 05:22:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.704 05:22:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.704 05:22:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:19.704 05:22:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.704 05:22:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.704 05:22:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.704 05:22:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:19.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:07:19.704 00:07:19.704 --- 10.0.0.2 ping statistics --- 00:07:19.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.704 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:07:19.704 05:22:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:19.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:07:19.704 00:07:19.704 --- 10.0.0.1 ping statistics --- 00:07:19.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.704 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:07:19.704 05:22:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.704 05:22:21 -- nvmf/common.sh@410 -- # return 0 00:07:19.704 05:22:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:19.704 05:22:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.704 05:22:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:19.704 05:22:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:19.704 05:22:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.704 05:22:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:19.704 05:22:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:19.704 05:22:21 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:19.704 05:22:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.704 05:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.704 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.704 ************************************ 00:07:19.704 START TEST nvmf_filesystem_no_in_capsule 00:07:19.704 ************************************ 00:07:19.704 05:22:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:19.704 05:22:21 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:19.704 05:22:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.704 05:22:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:19.704 05:22:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.704 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.704 05:22:21 -- nvmf/common.sh@469 -- # nvmfpid=1637700 00:07:19.704 05:22:21 -- nvmf/common.sh@470 -- # waitforlisten 1637700 00:07:19.704 05:22:21 -- common/autotest_common.sh@829 -- # '[' -z 1637700 ']' 00:07:19.704 05:22:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.704 05:22:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.704 05:22:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.704 05:22:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.704 05:22:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.704 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.704 [2024-12-07 05:22:21.999615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
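The namespace plumbing traced just above is what lets one physical host play both roles over its two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into a hand-runnable sketch (interface names and addresses are the ones this run happened to use, not fixed values):

  # flush stale addresses on both test ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # give the target port its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 for NVMe/TCP and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp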
00:07:19.704 [2024-12-07 05:22:21.999681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.704 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.704 [2024-12-07 05:22:22.076378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.704 [2024-12-07 05:22:22.151349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.704 [2024-12-07 05:22:22.151482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.704 [2024-12-07 05:22:22.151492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.704 [2024-12-07 05:22:22.151501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.704 [2024-12-07 05:22:22.151640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.704 [2024-12-07 05:22:22.151751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.704 [2024-12-07 05:22:22.151910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.704 [2024-12-07 05:22:22.151911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.704 05:22:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.704 05:22:22 -- common/autotest_common.sh@862 -- # return 0 00:07:19.704 05:22:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:19.704 05:22:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.704 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.704 05:22:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.704 05:22:22 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:19.704 05:22:22 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.704 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.704 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.704 [2024-12-07 05:22:22.843205] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.704 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.704 05:22:22 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:19.704 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.705 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.705 Malloc1 00:07:19.705 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.965 05:22:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.965 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.965 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.965 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.965 05:22:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.965 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.965 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.965 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.965 05:22:22 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
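With the listener registered, the target side of this first pass (in_capsule=0) is fully configured: a TCP transport, a 512 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. A minimal hand-run equivalent of the RPC sequence traced above could look like the sketch below; the method names and arguments are exactly the ones recorded in the trace, while the scripts/rpc.py invocation (talking to the default /var/tmp/spdk.sock socket of the nvmf_tgt started earlier) is an assumption about how one would issue them outside the test harness:

  # target was launched as: ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  # rpc.py reaches it over the default /var/tmp/spdk.sock UNIX socket
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # in-capsule data size 0 for this pass
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB RAM-backed bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420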
00:07:19.965 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.965 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.965 [2024-12-07 05:22:22.972312] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.965 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.965 05:22:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.965 05:22:22 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:19.965 05:22:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:19.965 05:22:22 -- common/autotest_common.sh@1369 -- # local bs 00:07:19.965 05:22:22 -- common/autotest_common.sh@1370 -- # local nb 00:07:19.965 05:22:22 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.965 05:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.965 05:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.965 05:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.965 05:22:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:19.965 { 00:07:19.965 "name": "Malloc1", 00:07:19.965 "aliases": [ 00:07:19.965 "8d48793c-cb41-402b-b02c-a0c7b4562811" 00:07:19.965 ], 00:07:19.965 "product_name": "Malloc disk", 00:07:19.965 "block_size": 512, 00:07:19.965 "num_blocks": 1048576, 00:07:19.965 "uuid": "8d48793c-cb41-402b-b02c-a0c7b4562811", 00:07:19.965 "assigned_rate_limits": { 00:07:19.965 "rw_ios_per_sec": 0, 00:07:19.965 "rw_mbytes_per_sec": 0, 00:07:19.965 "r_mbytes_per_sec": 0, 00:07:19.965 "w_mbytes_per_sec": 0 00:07:19.965 }, 00:07:19.965 "claimed": true, 00:07:19.965 "claim_type": "exclusive_write", 00:07:19.965 "zoned": false, 00:07:19.965 "supported_io_types": { 00:07:19.965 "read": true, 00:07:19.965 "write": true, 00:07:19.965 "unmap": true, 00:07:19.965 "write_zeroes": true, 00:07:19.965 "flush": true, 00:07:19.965 "reset": true, 00:07:19.965 "compare": false, 00:07:19.965 "compare_and_write": false, 00:07:19.965 "abort": true, 00:07:19.965 "nvme_admin": false, 00:07:19.965 "nvme_io": false 00:07:19.965 }, 00:07:19.965 "memory_domains": [ 00:07:19.965 { 00:07:19.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.965 "dma_device_type": 2 00:07:19.965 } 00:07:19.965 ], 00:07:19.965 "driver_specific": {} 00:07:19.965 } 00:07:19.965 ]' 00:07:19.965 05:22:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:19.965 05:22:23 -- common/autotest_common.sh@1372 -- # bs=512 00:07:19.965 05:22:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:19.965 05:22:23 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:19.965 05:22:23 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:19.965 05:22:23 -- common/autotest_common.sh@1377 -- # echo 512 00:07:19.965 05:22:23 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.965 05:22:23 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.881 05:22:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.881 05:22:24 -- common/autotest_common.sh@1187 -- # local i=0 00:07:21.881 05:22:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.881 05:22:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:21.881 05:22:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:23.794 05:22:26 
-- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:23.794 05:22:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:23.794 05:22:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.794 05:22:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:23.794 05:22:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.794 05:22:26 -- common/autotest_common.sh@1197 -- # return 0 00:07:23.794 05:22:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.794 05:22:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.794 05:22:26 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.794 05:22:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.794 05:22:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.794 05:22:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.794 05:22:26 -- setup/common.sh@80 -- # echo 536870912 00:07:23.794 05:22:26 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.794 05:22:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.794 05:22:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.794 05:22:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.794 05:22:26 -- target/filesystem.sh@69 -- # partprobe 00:07:23.794 05:22:27 -- target/filesystem.sh@70 -- # sleep 1 00:07:25.350 05:22:28 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:25.350 05:22:28 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:25.350 05:22:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:25.350 05:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.350 05:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.350 ************************************ 00:07:25.350 START TEST filesystem_ext4 00:07:25.350 ************************************ 00:07:25.350 05:22:28 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.350 05:22:28 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.350 05:22:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.350 05:22:28 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.350 05:22:28 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:25.350 05:22:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:25.350 05:22:28 -- common/autotest_common.sh@914 -- # local i=0 00:07:25.350 05:22:28 -- common/autotest_common.sh@915 -- # local force 00:07:25.350 05:22:28 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:25.350 05:22:28 -- common/autotest_common.sh@918 -- # force=-F 00:07:25.350 05:22:28 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:25.350 mke2fs 1.47.0 (5-Feb-2023) 00:07:25.350 Discarding device blocks: 0/522240 done 00:07:25.350 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.350 Filesystem UUID: cd40c838-4be0-49f1-972f-8a929fee6c81 00:07:25.350 Superblock backups stored on blocks: 00:07:25.350 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.350 00:07:25.350 Allocating group tables: 0/64 done 00:07:25.350 Writing inode tables: 0/64 done 00:07:25.350 Creating journal (8192 blocks): done 00:07:25.350 Writing superblocks and filesystem accounting information: 0/64 done 00:07:25.350 00:07:25.350 05:22:28 -- 
common/autotest_common.sh@931 -- # return 0 00:07:25.350 05:22:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.947 05:22:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.947 05:22:34 -- target/filesystem.sh@25 -- # sync 00:07:31.947 05:22:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.947 05:22:34 -- target/filesystem.sh@27 -- # sync 00:07:31.947 05:22:34 -- target/filesystem.sh@29 -- # i=0 00:07:31.947 05:22:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.947 05:22:34 -- target/filesystem.sh@37 -- # kill -0 1637700 00:07:31.947 05:22:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.947 05:22:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.947 05:22:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.947 05:22:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.947 00:07:31.947 real 0m6.095s 00:07:31.947 user 0m0.021s 00:07:31.947 sys 0m0.083s 00:07:31.947 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.947 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:31.947 ************************************ 00:07:31.947 END TEST filesystem_ext4 00:07:31.947 ************************************ 00:07:31.947 05:22:34 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:31.947 05:22:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:31.947 05:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.947 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:31.947 ************************************ 00:07:31.947 START TEST filesystem_btrfs 00:07:31.947 ************************************ 00:07:31.947 05:22:34 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:31.947 05:22:34 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:31.947 05:22:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.947 05:22:34 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:31.947 05:22:34 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:31.947 05:22:34 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:31.947 05:22:34 -- common/autotest_common.sh@914 -- # local i=0 00:07:31.947 05:22:34 -- common/autotest_common.sh@915 -- # local force 00:07:31.947 05:22:34 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:31.947 05:22:34 -- common/autotest_common.sh@920 -- # force=-f 00:07:31.947 05:22:34 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:31.947 btrfs-progs v6.8.1 00:07:31.947 See https://btrfs.readthedocs.io for more information. 00:07:31.947 00:07:31.947 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:31.947 NOTE: several default settings have changed in version 5.15, please make sure 00:07:31.947 this does not affect your deployments: 00:07:31.947 - DUP for metadata (-m dup) 00:07:31.947 - enabled no-holes (-O no-holes) 00:07:31.947 - enabled free-space-tree (-R free-space-tree) 00:07:31.947 00:07:31.947 Label: (null) 00:07:31.947 UUID: c1f3f8e7-8f63-4dbb-bb47-d836a78480e6 00:07:31.947 Node size: 16384 00:07:31.947 Sector size: 4096 (CPU page size: 4096) 00:07:31.947 Filesystem size: 510.00MiB 00:07:31.947 Block group profiles: 00:07:31.947 Data: single 8.00MiB 00:07:31.947 Metadata: DUP 32.00MiB 00:07:31.947 System: DUP 8.00MiB 00:07:31.947 SSD detected: yes 00:07:31.947 Zoned device: no 00:07:31.947 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:31.947 Checksum: crc32c 00:07:31.947 Number of devices: 1 00:07:31.947 Devices: 00:07:31.947 ID SIZE PATH 00:07:31.947 1 510.00MiB /dev/nvme0n1p1 00:07:31.947 00:07:31.947 05:22:34 -- common/autotest_common.sh@931 -- # return 0 00:07:31.947 05:22:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.947 05:22:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.947 05:22:34 -- target/filesystem.sh@25 -- # sync 00:07:31.947 05:22:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.947 05:22:34 -- target/filesystem.sh@27 -- # sync 00:07:31.947 05:22:34 -- target/filesystem.sh@29 -- # i=0 00:07:31.947 05:22:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.947 05:22:34 -- target/filesystem.sh@37 -- # kill -0 1637700 00:07:31.947 05:22:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.948 05:22:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.948 05:22:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.948 05:22:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.948 00:07:31.948 real 0m0.360s 00:07:31.948 user 0m0.032s 00:07:31.948 sys 0m0.109s 00:07:31.948 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.948 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:31.948 ************************************ 00:07:31.948 END TEST filesystem_btrfs 00:07:31.948 ************************************ 00:07:31.948 05:22:34 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:31.948 05:22:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:31.948 05:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.948 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:31.948 ************************************ 00:07:31.948 START TEST filesystem_xfs 00:07:31.948 ************************************ 00:07:31.948 05:22:34 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:31.948 05:22:34 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:31.948 05:22:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.948 05:22:34 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:31.948 05:22:34 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:31.948 05:22:34 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:31.948 05:22:34 -- common/autotest_common.sh@914 -- # local i=0 00:07:31.948 05:22:34 -- common/autotest_common.sh@915 -- # local force 00:07:31.948 05:22:34 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:31.948 05:22:34 -- common/autotest_common.sh@920 -- # force=-f 00:07:31.948 05:22:34 -- common/autotest_common.sh@923 -- # mkfs.xfs 
-f /dev/nvme0n1p1 00:07:31.948 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:31.948 = sectsz=512 attr=2, projid32bit=1 00:07:31.948 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:31.948 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:31.948 data = bsize=4096 blocks=130560, imaxpct=25 00:07:31.948 = sunit=0 swidth=0 blks 00:07:31.948 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:31.948 log =internal log bsize=4096 blocks=16384, version=2 00:07:31.948 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:31.948 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:32.518 Discarding blocks...Done. 00:07:32.518 05:22:35 -- common/autotest_common.sh@931 -- # return 0 00:07:32.518 05:22:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.429 05:22:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.429 05:22:37 -- target/filesystem.sh@25 -- # sync 00:07:34.429 05:22:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.429 05:22:37 -- target/filesystem.sh@27 -- # sync 00:07:34.429 05:22:37 -- target/filesystem.sh@29 -- # i=0 00:07:34.429 05:22:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.429 05:22:37 -- target/filesystem.sh@37 -- # kill -0 1637700 00:07:34.429 05:22:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.429 05:22:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.429 05:22:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.429 05:22:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.429 00:07:34.429 real 0m3.043s 00:07:34.429 user 0m0.024s 00:07:34.429 sys 0m0.082s 00:07:34.429 05:22:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.429 05:22:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.429 ************************************ 00:07:34.429 END TEST filesystem_xfs 00:07:34.429 ************************************ 00:07:34.429 05:22:37 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:34.692 05:22:37 -- target/filesystem.sh@93 -- # sync 00:07:34.692 05:22:37 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.692 05:22:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.692 05:22:37 -- common/autotest_common.sh@1208 -- # local i=0 00:07:34.692 05:22:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:34.692 05:22:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.692 05:22:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:34.692 05:22:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.692 05:22:37 -- common/autotest_common.sh@1220 -- # return 0 00:07:34.692 05:22:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.692 05:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.692 05:22:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.692 05:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.692 05:22:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:34.692 05:22:37 -- target/filesystem.sh@101 -- # killprocess 1637700 00:07:34.692 05:22:37 -- common/autotest_common.sh@936 -- # '[' -z 1637700 ']' 00:07:34.692 05:22:37 -- common/autotest_common.sh@940 -- # kill -0 1637700 00:07:34.692 05:22:37 -- common/autotest_common.sh@941 -- # uname 00:07:34.692 
05:22:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.692 05:22:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1637700 00:07:34.692 05:22:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.692 05:22:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.692 05:22:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1637700' 00:07:34.692 killing process with pid 1637700 00:07:34.692 05:22:37 -- common/autotest_common.sh@955 -- # kill 1637700 00:07:34.692 05:22:37 -- common/autotest_common.sh@960 -- # wait 1637700 00:07:34.955 05:22:38 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:34.955 00:07:34.955 real 0m16.210s 00:07:34.955 user 1m3.898s 00:07:34.955 sys 0m1.270s 00:07:34.955 05:22:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.955 05:22:38 -- common/autotest_common.sh@10 -- # set +x 00:07:34.955 ************************************ 00:07:34.955 END TEST nvmf_filesystem_no_in_capsule 00:07:34.955 ************************************ 00:07:34.955 05:22:38 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:34.955 05:22:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:34.955 05:22:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.955 05:22:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.216 ************************************ 00:07:35.216 START TEST nvmf_filesystem_in_capsule 00:07:35.216 ************************************ 00:07:35.216 05:22:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:35.216 05:22:38 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:35.216 05:22:38 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.216 05:22:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:35.216 05:22:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.216 05:22:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.217 05:22:38 -- nvmf/common.sh@469 -- # nvmfpid=1641238 00:07:35.217 05:22:38 -- nvmf/common.sh@470 -- # waitforlisten 1641238 00:07:35.217 05:22:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.217 05:22:38 -- common/autotest_common.sh@829 -- # '[' -z 1641238 ']' 00:07:35.217 05:22:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.217 05:22:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.217 05:22:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.217 05:22:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.217 05:22:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.217 [2024-12-07 05:22:38.255592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
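Functionally the second pass repeats nvmf_filesystem_part with 4096 instead of 0, so the only configuration difference is the transport's in-capsule data size: write payloads up to 4 KiB may now be carried inside the NVMe/TCP command capsule itself rather than in a separate data transfer, and the filesystem exercises are then rerun against that setting. Side by side, both lines as the harness issues them (the second appears in the trace that follows):

  # first pass: nvmf_filesystem_no_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # second pass: nvmf_filesystem_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096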
00:07:35.217 [2024-12-07 05:22:38.255650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.217 [2024-12-07 05:22:38.324465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.217 [2024-12-07 05:22:38.393367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.217 [2024-12-07 05:22:38.393500] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.217 [2024-12-07 05:22:38.393511] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.217 [2024-12-07 05:22:38.393521] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.217 [2024-12-07 05:22:38.393661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.217 [2024-12-07 05:22:38.393784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.217 [2024-12-07 05:22:38.393910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.217 [2024-12-07 05:22:38.393910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.159 05:22:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.159 05:22:39 -- common/autotest_common.sh@862 -- # return 0 00:07:36.159 05:22:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:36.159 05:22:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 05:22:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.159 05:22:39 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.159 05:22:39 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:36.159 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 [2024-12-07 05:22:39.089240] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.159 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.159 05:22:39 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.159 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 Malloc1 00:07:36.159 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.159 05:22:39 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.159 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.159 05:22:39 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.159 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.159 05:22:39 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
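From here the flow mirrors the first pass on the initiator side: connect to the subsystem with nvme-cli, find the namespace by its serial, partition it, and run a filesystem through a small write/remove cycle. Condensed from the commands the first pass of this trace recorded (the host NQN/ID and the nvme0n1 name are the values this particular run produced; lsblk output is what actually determines the device name):

  # attach the remote namespace over NVMe/TCP
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  # locate the block device carrying the SPDK serial (nvme0n1 in this run)
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME
  # one GPT partition across the 512 MiB namespace, then a filesystem on it
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.ext4 -F /dev/nvme0n1p1      # the suite repeats this with mkfs.btrfs -f and mkfs.xfs -f
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
  umount /mnt/device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1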
00:07:36.159 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.159 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.159 [2024-12-07 05:22:39.217038] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.159 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.159 05:22:39 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.159 05:22:39 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:36.160 05:22:39 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:36.160 05:22:39 -- common/autotest_common.sh@1369 -- # local bs 00:07:36.160 05:22:39 -- common/autotest_common.sh@1370 -- # local nb 00:07:36.160 05:22:39 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.160 05:22:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.160 05:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:36.160 05:22:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.160 05:22:39 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:36.160 { 00:07:36.160 "name": "Malloc1", 00:07:36.160 "aliases": [ 00:07:36.160 "28f18718-fae9-4734-8335-d6944b600f7e" 00:07:36.160 ], 00:07:36.160 "product_name": "Malloc disk", 00:07:36.160 "block_size": 512, 00:07:36.160 "num_blocks": 1048576, 00:07:36.160 "uuid": "28f18718-fae9-4734-8335-d6944b600f7e", 00:07:36.160 "assigned_rate_limits": { 00:07:36.160 "rw_ios_per_sec": 0, 00:07:36.160 "rw_mbytes_per_sec": 0, 00:07:36.160 "r_mbytes_per_sec": 0, 00:07:36.160 "w_mbytes_per_sec": 0 00:07:36.160 }, 00:07:36.160 "claimed": true, 00:07:36.160 "claim_type": "exclusive_write", 00:07:36.160 "zoned": false, 00:07:36.160 "supported_io_types": { 00:07:36.160 "read": true, 00:07:36.160 "write": true, 00:07:36.160 "unmap": true, 00:07:36.160 "write_zeroes": true, 00:07:36.160 "flush": true, 00:07:36.160 "reset": true, 00:07:36.160 "compare": false, 00:07:36.160 "compare_and_write": false, 00:07:36.160 "abort": true, 00:07:36.160 "nvme_admin": false, 00:07:36.160 "nvme_io": false 00:07:36.160 }, 00:07:36.160 "memory_domains": [ 00:07:36.160 { 00:07:36.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.160 "dma_device_type": 2 00:07:36.160 } 00:07:36.160 ], 00:07:36.160 "driver_specific": {} 00:07:36.160 } 00:07:36.160 ]' 00:07:36.160 05:22:39 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:36.160 05:22:39 -- common/autotest_common.sh@1372 -- # bs=512 00:07:36.160 05:22:39 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:36.160 05:22:39 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:36.160 05:22:39 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:36.160 05:22:39 -- common/autotest_common.sh@1377 -- # echo 512 00:07:36.160 05:22:39 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.160 05:22:39 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.074 05:22:40 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.074 05:22:40 -- common/autotest_common.sh@1187 -- # local i=0 00:07:38.074 05:22:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.074 05:22:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:38.074 05:22:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:39.984 05:22:42 
-- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:39.984 05:22:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:39.984 05:22:42 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.984 05:22:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:39.984 05:22:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.984 05:22:42 -- common/autotest_common.sh@1197 -- # return 0 00:07:39.984 05:22:42 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:39.984 05:22:42 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:39.984 05:22:42 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:39.984 05:22:42 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:39.984 05:22:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.984 05:22:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.984 05:22:42 -- setup/common.sh@80 -- # echo 536870912 00:07:39.984 05:22:42 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:39.984 05:22:42 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:39.984 05:22:42 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:39.984 05:22:42 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:39.984 05:22:43 -- target/filesystem.sh@69 -- # partprobe 00:07:40.245 05:22:43 -- target/filesystem.sh@70 -- # sleep 1 00:07:41.187 05:22:44 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:41.187 05:22:44 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.187 05:22:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:41.187 05:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.187 05:22:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.187 ************************************ 00:07:41.187 START TEST filesystem_in_capsule_ext4 00:07:41.187 ************************************ 00:07:41.187 05:22:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.187 05:22:44 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.187 05:22:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.187 05:22:44 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.187 05:22:44 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:41.187 05:22:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:41.187 05:22:44 -- common/autotest_common.sh@914 -- # local i=0 00:07:41.187 05:22:44 -- common/autotest_common.sh@915 -- # local force 00:07:41.187 05:22:44 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:41.187 05:22:44 -- common/autotest_common.sh@918 -- # force=-F 00:07:41.187 05:22:44 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.187 mke2fs 1.47.0 (5-Feb-2023) 00:07:41.187 Discarding device blocks: 0/522240 done 00:07:41.187 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.187 Filesystem UUID: ec657d0c-1a60-456b-a8c7-6e22eab3c4b9 00:07:41.187 Superblock backups stored on blocks: 00:07:41.187 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.187 00:07:41.187 Allocating group tables: 0/64 done 00:07:41.187 Writing inode tables: 0/64 done 00:07:41.449 Creating journal (8192 blocks): done 00:07:41.449 Writing superblocks and filesystem accounting information: 0/64 done 00:07:41.449 00:07:41.449 
05:22:44 -- common/autotest_common.sh@931 -- # return 0 00:07:41.449 05:22:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.039 05:22:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.039 05:22:50 -- target/filesystem.sh@25 -- # sync 00:07:48.039 05:22:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.039 05:22:50 -- target/filesystem.sh@27 -- # sync 00:07:48.039 05:22:50 -- target/filesystem.sh@29 -- # i=0 00:07:48.039 05:22:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.039 05:22:50 -- target/filesystem.sh@37 -- # kill -0 1641238 00:07:48.039 05:22:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.039 05:22:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.039 05:22:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.039 00:07:48.039 real 0m5.803s 00:07:48.039 user 0m0.018s 00:07:48.039 sys 0m0.086s 00:07:48.039 05:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.039 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 END TEST filesystem_in_capsule_ext4 00:07:48.039 ************************************ 00:07:48.039 05:22:50 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:48.039 05:22:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:48.039 05:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.039 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 START TEST filesystem_in_capsule_btrfs 00:07:48.039 ************************************ 00:07:48.039 05:22:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:48.039 05:22:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:48.039 05:22:50 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:48.039 05:22:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:48.039 05:22:50 -- common/autotest_common.sh@914 -- # local i=0 00:07:48.039 05:22:50 -- common/autotest_common.sh@915 -- # local force 00:07:48.039 05:22:50 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:48.039 05:22:50 -- common/autotest_common.sh@920 -- # force=-f 00:07:48.039 05:22:50 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.039 btrfs-progs v6.8.1 00:07:48.039 See https://btrfs.readthedocs.io for more information. 00:07:48.039 00:07:48.039 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
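Editorial aside: the ext4 leg just traced (and the btrfs/xfs legs that follow) all use the same host-side pattern: connect to the subsystem over TCP, wait for the namespace to show up by serial number, partition and format it, then do a small mount/write/remove cycle. A condensed sketch with hostnqn/hostid, device names and mountpoint copied from the trace; error handling omitted:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  # wait for the block device that reports the subsystem serial number
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1                  # the btrfs and xfs legs swap in mkfs.btrfs -f / mkfs.xfs -f
  mkdir -p /mnt/device && mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
  umount /mnt/device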
00:07:48.039 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.039 this does not affect your deployments: 00:07:48.039 - DUP for metadata (-m dup) 00:07:48.039 - enabled no-holes (-O no-holes) 00:07:48.039 - enabled free-space-tree (-R free-space-tree) 00:07:48.039 00:07:48.039 Label: (null) 00:07:48.039 UUID: 3af510ef-3d68-4df8-b5f8-b60f5ddcfca6 00:07:48.039 Node size: 16384 00:07:48.039 Sector size: 4096 (CPU page size: 4096) 00:07:48.039 Filesystem size: 510.00MiB 00:07:48.039 Block group profiles: 00:07:48.039 Data: single 8.00MiB 00:07:48.039 Metadata: DUP 32.00MiB 00:07:48.039 System: DUP 8.00MiB 00:07:48.039 SSD detected: yes 00:07:48.039 Zoned device: no 00:07:48.039 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.039 Checksum: crc32c 00:07:48.039 Number of devices: 1 00:07:48.039 Devices: 00:07:48.039 ID SIZE PATH 00:07:48.039 1 510.00MiB /dev/nvme0n1p1 00:07:48.039 00:07:48.039 05:22:50 -- common/autotest_common.sh@931 -- # return 0 00:07:48.039 05:22:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.039 05:22:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.039 05:22:50 -- target/filesystem.sh@25 -- # sync 00:07:48.039 05:22:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.039 05:22:50 -- target/filesystem.sh@27 -- # sync 00:07:48.039 05:22:50 -- target/filesystem.sh@29 -- # i=0 00:07:48.039 05:22:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.039 05:22:50 -- target/filesystem.sh@37 -- # kill -0 1641238 00:07:48.039 05:22:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.039 05:22:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.039 05:22:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.039 00:07:48.039 real 0m0.448s 00:07:48.039 user 0m0.031s 00:07:48.039 sys 0m0.120s 00:07:48.039 05:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.039 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 END TEST filesystem_in_capsule_btrfs 00:07:48.039 ************************************ 00:07:48.039 05:22:50 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.039 05:22:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:48.039 05:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.039 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 START TEST filesystem_in_capsule_xfs 00:07:48.039 ************************************ 00:07:48.039 05:22:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.039 05:22:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.039 05:22:50 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.039 05:22:50 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:48.039 05:22:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:48.039 05:22:50 -- common/autotest_common.sh@914 -- # local i=0 00:07:48.039 05:22:50 -- common/autotest_common.sh@915 -- # local force 00:07:48.039 05:22:50 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:48.039 05:22:50 -- common/autotest_common.sh@920 -- # force=-f 00:07:48.039 05:22:50 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.039 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.039 = sectsz=512 attr=2, projid32bit=1 00:07:48.039 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.039 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.039 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.039 = sunit=0 swidth=0 blks 00:07:48.039 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:48.039 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.039 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.039 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.300 Discarding blocks...Done. 00:07:48.300 05:22:51 -- common/autotest_common.sh@931 -- # return 0 00:07:48.300 05:22:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.219 05:22:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.219 05:22:53 -- target/filesystem.sh@25 -- # sync 00:07:50.219 05:22:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.219 05:22:53 -- target/filesystem.sh@27 -- # sync 00:07:50.219 05:22:53 -- target/filesystem.sh@29 -- # i=0 00:07:50.219 05:22:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.219 05:22:53 -- target/filesystem.sh@37 -- # kill -0 1641238 00:07:50.219 05:22:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.219 05:22:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.219 05:22:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.219 05:22:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.219 00:07:50.219 real 0m2.789s 00:07:50.219 user 0m0.025s 00:07:50.219 sys 0m0.079s 00:07:50.219 05:22:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.219 05:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:50.219 ************************************ 00:07:50.219 END TEST filesystem_in_capsule_xfs 00:07:50.219 ************************************ 00:07:50.480 05:22:53 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.742 05:22:53 -- target/filesystem.sh@93 -- # sync 00:07:50.742 05:22:53 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.742 05:22:53 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.742 05:22:53 -- common/autotest_common.sh@1208 -- # local i=0 00:07:50.742 05:22:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:50.742 05:22:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.742 05:22:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:50.742 05:22:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.742 05:22:53 -- common/autotest_common.sh@1220 -- # return 0 00:07:50.742 05:22:53 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.742 05:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.742 05:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:50.742 05:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.742 05:22:53 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.742 05:22:53 -- target/filesystem.sh@101 -- # killprocess 1641238 00:07:50.742 05:22:53 -- common/autotest_common.sh@936 -- # '[' -z 1641238 ']' 00:07:50.742 05:22:53 -- common/autotest_common.sh@940 -- # kill -0 1641238 00:07:50.742 05:22:53 -- 
common/autotest_common.sh@941 -- # uname 00:07:50.742 05:22:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.742 05:22:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1641238 00:07:50.742 05:22:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.742 05:22:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.742 05:22:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1641238' 00:07:50.742 killing process with pid 1641238 00:07:50.742 05:22:53 -- common/autotest_common.sh@955 -- # kill 1641238 00:07:50.742 05:22:53 -- common/autotest_common.sh@960 -- # wait 1641238 00:07:51.003 05:22:54 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.003 00:07:51.003 real 0m15.989s 00:07:51.003 user 1m3.058s 00:07:51.003 sys 0m1.282s 00:07:51.003 05:22:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.003 05:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:51.003 ************************************ 00:07:51.003 END TEST nvmf_filesystem_in_capsule 00:07:51.003 ************************************ 00:07:51.003 05:22:54 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:51.003 05:22:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:51.003 05:22:54 -- nvmf/common.sh@116 -- # sync 00:07:51.003 05:22:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:51.003 05:22:54 -- nvmf/common.sh@119 -- # set +e 00:07:51.003 05:22:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:51.003 05:22:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:51.003 rmmod nvme_tcp 00:07:51.263 rmmod nvme_fabrics 00:07:51.263 rmmod nvme_keyring 00:07:51.263 05:22:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:51.263 05:22:54 -- nvmf/common.sh@123 -- # set -e 00:07:51.263 05:22:54 -- nvmf/common.sh@124 -- # return 0 00:07:51.263 05:22:54 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:51.263 05:22:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:51.263 05:22:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:51.263 05:22:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:51.263 05:22:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.263 05:22:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:51.263 05:22:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.264 05:22:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.264 05:22:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.177 05:22:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:53.177 00:07:53.177 real 0m42.270s 00:07:53.177 user 2m9.267s 00:07:53.177 sys 0m8.234s 00:07:53.178 05:22:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.178 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:07:53.178 ************************************ 00:07:53.178 END TEST nvmf_filesystem 00:07:53.178 ************************************ 00:07:53.178 05:22:56 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:53.178 05:22:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.178 05:22:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.178 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:07:53.438 ************************************ 00:07:53.438 START TEST nvmf_discovery 00:07:53.438 ************************************ 00:07:53.438 05:22:56 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:53.438 * Looking for test storage... 00:07:53.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.438 05:22:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.438 05:22:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.438 05:22:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.438 05:22:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.438 05:22:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.438 05:22:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.438 05:22:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.438 05:22:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.438 05:22:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.438 05:22:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.438 05:22:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.438 05:22:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.438 05:22:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.438 05:22:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.438 05:22:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.438 05:22:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.438 05:22:56 -- scripts/common.sh@344 -- # : 1 00:07:53.438 05:22:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.438 05:22:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.438 05:22:56 -- scripts/common.sh@364 -- # decimal 1 00:07:53.438 05:22:56 -- scripts/common.sh@352 -- # local d=1 00:07:53.438 05:22:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.438 05:22:56 -- scripts/common.sh@354 -- # echo 1 00:07:53.438 05:22:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.438 05:22:56 -- scripts/common.sh@365 -- # decimal 2 00:07:53.438 05:22:56 -- scripts/common.sh@352 -- # local d=2 00:07:53.438 05:22:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.438 05:22:56 -- scripts/common.sh@354 -- # echo 2 00:07:53.438 05:22:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.438 05:22:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.438 05:22:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.438 05:22:56 -- scripts/common.sh@367 -- # return 0 00:07:53.438 05:22:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.438 05:22:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.438 --rc genhtml_branch_coverage=1 00:07:53.438 --rc genhtml_function_coverage=1 00:07:53.438 --rc genhtml_legend=1 00:07:53.438 --rc geninfo_all_blocks=1 00:07:53.438 --rc geninfo_unexecuted_blocks=1 00:07:53.438 00:07:53.438 ' 00:07:53.438 05:22:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.438 --rc genhtml_branch_coverage=1 00:07:53.438 --rc genhtml_function_coverage=1 00:07:53.438 --rc genhtml_legend=1 00:07:53.438 --rc geninfo_all_blocks=1 00:07:53.438 --rc geninfo_unexecuted_blocks=1 00:07:53.438 00:07:53.438 ' 00:07:53.438 05:22:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.438 --rc genhtml_branch_coverage=1 00:07:53.438 
--rc genhtml_function_coverage=1 00:07:53.438 --rc genhtml_legend=1 00:07:53.438 --rc geninfo_all_blocks=1 00:07:53.438 --rc geninfo_unexecuted_blocks=1 00:07:53.438 00:07:53.438 ' 00:07:53.438 05:22:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.438 --rc genhtml_branch_coverage=1 00:07:53.439 --rc genhtml_function_coverage=1 00:07:53.439 --rc genhtml_legend=1 00:07:53.439 --rc geninfo_all_blocks=1 00:07:53.439 --rc geninfo_unexecuted_blocks=1 00:07:53.439 00:07:53.439 ' 00:07:53.439 05:22:56 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.439 05:22:56 -- nvmf/common.sh@7 -- # uname -s 00:07:53.439 05:22:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.439 05:22:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.439 05:22:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.439 05:22:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.439 05:22:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.439 05:22:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.439 05:22:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.439 05:22:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.439 05:22:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.439 05:22:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.439 05:22:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:53.439 05:22:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:53.439 05:22:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.439 05:22:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.439 05:22:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.439 05:22:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.439 05:22:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.439 05:22:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.439 05:22:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.439 05:22:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.439 05:22:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.439 05:22:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.439 05:22:56 -- paths/export.sh@5 -- # export PATH 00:07:53.439 05:22:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.439 05:22:56 -- nvmf/common.sh@46 -- # : 0 00:07:53.439 05:22:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.439 05:22:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.439 05:22:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.439 05:22:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.439 05:22:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.439 05:22:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:53.439 05:22:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.439 05:22:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.439 05:22:56 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:53.439 05:22:56 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:53.439 05:22:56 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:53.439 05:22:56 -- target/discovery.sh@15 -- # hash nvme 00:07:53.439 05:22:56 -- target/discovery.sh@20 -- # nvmftestinit 00:07:53.439 05:22:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:53.439 05:22:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.439 05:22:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:53.439 05:22:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:53.439 05:22:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:53.439 05:22:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.439 05:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.439 05:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.439 05:22:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:53.439 05:22:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:53.439 05:22:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:53.439 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:08:01.578 05:23:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:01.578 05:23:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:01.578 05:23:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:01.578 05:23:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:01.578 05:23:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:01.578 05:23:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:01.578 05:23:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:01.578 05:23:03 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:01.578 05:23:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:01.578 05:23:03 -- nvmf/common.sh@295 -- # e810=() 00:08:01.578 05:23:03 -- nvmf/common.sh@295 -- # local -ga e810 00:08:01.578 05:23:03 -- nvmf/common.sh@296 -- # x722=() 00:08:01.578 05:23:03 -- nvmf/common.sh@296 -- # local -ga x722 00:08:01.578 05:23:03 -- nvmf/common.sh@297 -- # mlx=() 00:08:01.578 05:23:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:01.579 05:23:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.579 05:23:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:01.579 05:23:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:01.579 05:23:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.579 05:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:01.579 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:01.579 05:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.579 05:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:01.579 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:01.579 05:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.579 05:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.579 05:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.579 05:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:01.579 Found net devices under 0000:31:00.0: cvl_0_0 00:08:01.579 05:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.579 05:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.579 05:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.579 05:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.579 05:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:01.579 Found net devices under 0000:31:00.1: cvl_0_1 00:08:01.579 05:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.579 05:23:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:01.579 05:23:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:01.579 05:23:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:01.579 05:23:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.579 05:23:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.579 05:23:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.579 05:23:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:01.579 05:23:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.579 05:23:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.579 05:23:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:01.579 05:23:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.579 05:23:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.579 05:23:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:01.579 05:23:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:01.579 05:23:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.579 05:23:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.579 05:23:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.579 05:23:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.579 05:23:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:01.579 05:23:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.579 05:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.579 05:23:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.579 05:23:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:01.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:08:01.579 00:08:01.579 --- 10.0.0.2 ping statistics --- 00:08:01.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.579 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:08:01.579 05:23:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:08:01.579 00:08:01.579 --- 10.0.0.1 ping statistics --- 00:08:01.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.579 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:01.579 05:23:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.579 05:23:04 -- nvmf/common.sh@410 -- # return 0 00:08:01.579 05:23:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:01.579 05:23:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.579 05:23:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:01.579 05:23:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:01.579 05:23:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.579 05:23:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:01.579 05:23:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:01.579 05:23:04 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:01.579 05:23:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:01.579 05:23:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.579 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:01.579 05:23:04 -- nvmf/common.sh@469 -- # nvmfpid=1648938 00:08:01.579 05:23:04 -- nvmf/common.sh@470 -- # waitforlisten 1648938 00:08:01.579 05:23:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.579 05:23:04 -- common/autotest_common.sh@829 -- # '[' -z 1648938 ']' 00:08:01.579 05:23:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.579 05:23:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.579 05:23:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.579 05:23:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.579 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:01.579 [2024-12-07 05:23:04.175143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.579 [2024-12-07 05:23:04.175188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.579 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.579 [2024-12-07 05:23:04.243349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.579 [2024-12-07 05:23:04.307182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.579 [2024-12-07 05:23:04.307316] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.579 [2024-12-07 05:23:04.307327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.579 [2024-12-07 05:23:04.307336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
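Editorial aside: before the discovery test proper, the nvmftestinit plumbing traced above moves one port of the e810 pair into a private network namespace so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, in the root namespace) talk over a real link. Roughly, with interface names and addresses copied from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions
  modprobe nvme-tcp                                                # host-side NVMe/TCP initiator driver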
00:08:01.579 [2024-12-07 05:23:04.307477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.579 [2024-12-07 05:23:04.307590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.579 [2024-12-07 05:23:04.307746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.579 [2024-12-07 05:23:04.307746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.840 05:23:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.840 05:23:04 -- common/autotest_common.sh@862 -- # return 0 00:08:01.840 05:23:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:01.840 05:23:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.840 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:01.840 05:23:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.840 05:23:04 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.840 05:23:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.840 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.840 [2024-12-07 05:23:05.007242] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.840 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.840 05:23:05 -- target/discovery.sh@26 -- # seq 1 4 00:08:01.840 05:23:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.840 05:23:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:01.840 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.840 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.840 Null1 00:08:01.840 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.840 05:23:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.840 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.840 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.840 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.840 05:23:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:01.840 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.840 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.840 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.840 05:23:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.840 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.840 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.841 [2024-12-07 05:23:05.067612] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.841 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.841 05:23:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.841 05:23:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:01.841 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.841 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 Null2 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:02.101 05:23:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.101 05:23:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 Null3 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.101 05:23:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 Null4 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:02.101 
05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.101 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.101 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.101 05:23:05 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:02.361 00:08:02.361 Discovery Log Number of Records 6, Generation counter 6 00:08:02.362 =====Discovery Log Entry 0====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: current discovery subsystem 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4420 00:08:02.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: explicit discovery connections, duplicate discovery information 00:08:02.362 sectype: none 00:08:02.362 =====Discovery Log Entry 1====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: nvme subsystem 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4420 00:08:02.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: none 00:08:02.362 sectype: none 00:08:02.362 =====Discovery Log Entry 2====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: nvme subsystem 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4420 00:08:02.362 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: none 00:08:02.362 sectype: none 00:08:02.362 =====Discovery Log Entry 3====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: nvme subsystem 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4420 00:08:02.362 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: none 00:08:02.362 sectype: none 00:08:02.362 =====Discovery Log Entry 4====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: nvme subsystem 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4420 00:08:02.362 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: none 00:08:02.362 sectype: none 00:08:02.362 =====Discovery Log Entry 5====== 00:08:02.362 trtype: tcp 00:08:02.362 adrfam: ipv4 00:08:02.362 subtype: discovery subsystem referral 00:08:02.362 treq: not required 00:08:02.362 portid: 0 00:08:02.362 trsvcid: 4430 00:08:02.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.362 traddr: 10.0.0.2 00:08:02.362 eflags: none 00:08:02.362 sectype: none 00:08:02.362 05:23:05 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:02.362 Perform nvmf subsystem discovery via RPC 00:08:02.362 05:23:05 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 [2024-12-07 05:23:05.464765] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:02.362 [ 00:08:02.362 { 00:08:02.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:02.362 "subtype": "Discovery", 00:08:02.362 "listen_addresses": [ 00:08:02.362 { 00:08:02.362 "transport": "TCP", 00:08:02.362 "trtype": "TCP", 00:08:02.362 "adrfam": "IPv4", 00:08:02.362 "traddr": "10.0.0.2", 00:08:02.362 "trsvcid": "4420" 00:08:02.362 } 00:08:02.362 ], 00:08:02.362 "allow_any_host": true, 00:08:02.362 "hosts": [] 00:08:02.362 }, 00:08:02.362 { 00:08:02.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.362 "subtype": "NVMe", 00:08:02.362 "listen_addresses": [ 00:08:02.362 { 00:08:02.362 "transport": "TCP", 00:08:02.362 "trtype": "TCP", 00:08:02.362 "adrfam": "IPv4", 00:08:02.362 "traddr": "10.0.0.2", 00:08:02.362 "trsvcid": "4420" 00:08:02.362 } 00:08:02.362 ], 00:08:02.362 "allow_any_host": true, 00:08:02.362 "hosts": [], 00:08:02.362 "serial_number": "SPDK00000000000001", 00:08:02.362 "model_number": "SPDK bdev Controller", 00:08:02.362 "max_namespaces": 32, 00:08:02.362 "min_cntlid": 1, 00:08:02.362 "max_cntlid": 65519, 00:08:02.362 "namespaces": [ 00:08:02.362 { 00:08:02.362 "nsid": 1, 00:08:02.362 "bdev_name": "Null1", 00:08:02.362 "name": "Null1", 00:08:02.362 "nguid": "EA54512F8855443998B512E1C0FE0895", 00:08:02.362 "uuid": "ea54512f-8855-4439-98b5-12e1c0fe0895" 00:08:02.362 } 00:08:02.362 ] 00:08:02.362 }, 00:08:02.362 { 00:08:02.362 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:02.362 "subtype": "NVMe", 00:08:02.362 "listen_addresses": [ 00:08:02.362 { 00:08:02.362 "transport": "TCP", 00:08:02.362 "trtype": "TCP", 00:08:02.362 "adrfam": "IPv4", 00:08:02.362 "traddr": "10.0.0.2", 00:08:02.362 "trsvcid": "4420" 00:08:02.362 } 00:08:02.362 ], 00:08:02.362 "allow_any_host": true, 00:08:02.362 "hosts": [], 00:08:02.362 "serial_number": "SPDK00000000000002", 00:08:02.362 "model_number": "SPDK bdev Controller", 00:08:02.362 "max_namespaces": 32, 00:08:02.362 "min_cntlid": 1, 00:08:02.362 "max_cntlid": 65519, 00:08:02.362 "namespaces": [ 00:08:02.362 { 00:08:02.362 "nsid": 1, 00:08:02.362 "bdev_name": "Null2", 00:08:02.362 "name": "Null2", 00:08:02.362 "nguid": "74FF8C6CC3974BC7915AF646E8DA8968", 00:08:02.362 "uuid": "74ff8c6c-c397-4bc7-915a-f646e8da8968" 00:08:02.362 } 00:08:02.362 ] 00:08:02.362 }, 00:08:02.362 { 00:08:02.362 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:02.362 "subtype": "NVMe", 00:08:02.362 "listen_addresses": [ 00:08:02.362 { 00:08:02.362 "transport": "TCP", 00:08:02.362 "trtype": "TCP", 00:08:02.362 "adrfam": "IPv4", 00:08:02.362 "traddr": "10.0.0.2", 00:08:02.362 "trsvcid": "4420" 00:08:02.362 } 00:08:02.362 ], 00:08:02.362 "allow_any_host": true, 00:08:02.362 "hosts": [], 00:08:02.362 "serial_number": "SPDK00000000000003", 00:08:02.362 "model_number": "SPDK bdev Controller", 00:08:02.362 "max_namespaces": 32, 00:08:02.362 "min_cntlid": 1, 00:08:02.362 "max_cntlid": 65519, 00:08:02.362 "namespaces": [ 00:08:02.362 { 00:08:02.362 "nsid": 1, 00:08:02.362 "bdev_name": "Null3", 00:08:02.362 "name": "Null3", 00:08:02.362 "nguid": "7EB07316B70C4088A834BB206ABA16AA", 00:08:02.362 "uuid": "7eb07316-b70c-4088-a834-bb206aba16aa" 00:08:02.362 } 00:08:02.362 ] 
00:08:02.362 }, 00:08:02.362 { 00:08:02.362 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:02.362 "subtype": "NVMe", 00:08:02.362 "listen_addresses": [ 00:08:02.362 { 00:08:02.362 "transport": "TCP", 00:08:02.362 "trtype": "TCP", 00:08:02.362 "adrfam": "IPv4", 00:08:02.362 "traddr": "10.0.0.2", 00:08:02.362 "trsvcid": "4420" 00:08:02.362 } 00:08:02.362 ], 00:08:02.362 "allow_any_host": true, 00:08:02.362 "hosts": [], 00:08:02.362 "serial_number": "SPDK00000000000004", 00:08:02.362 "model_number": "SPDK bdev Controller", 00:08:02.362 "max_namespaces": 32, 00:08:02.362 "min_cntlid": 1, 00:08:02.362 "max_cntlid": 65519, 00:08:02.362 "namespaces": [ 00:08:02.362 { 00:08:02.362 "nsid": 1, 00:08:02.362 "bdev_name": "Null4", 00:08:02.362 "name": "Null4", 00:08:02.362 "nguid": "AAAF00B185124ACAB53670904C9A76AA", 00:08:02.362 "uuid": "aaaf00b1-8512-4aca-b536-70904c9a76aa" 00:08:02.362 } 00:08:02.362 ] 00:08:02.362 } 00:08:02.362 ] 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@42 -- # seq 1 4 00:08:02.362 05:23:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.362 05:23:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.362 05:23:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.362 05:23:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.362 05:23:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.362 05:23:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.362 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
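Editorial aside: the discovery test whose output appears above provisions four null-bdev subsystems, exposes the discovery service plus a referral on port 4430, then reads the same topology back from two angles: nvme discover on the host and nvmf_get_subsystems over RPC. One of the four identical per-subsystem legs, and the two queries, condense to roughly the following ($rpc is the same editorial shorthand for scripts/rpc.py; values are copied from the trace, and Null2-Null4 / cnode2-cnode4 follow the same pattern):

  $rpc bdev_null_create Null1 102400 512                           # null bdev sized per NULL_BDEV_SIZE/NULL_BLOCK_SIZE above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # expose the discovery service itself and a referral on port 4430
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # read the topology back: 6 discovery records (discovery subsystem, 4 NVMe subsystems, 1 referral)
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  $rpc nvmf_get_subsystems                                         # JSON view shown in the trace above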
00:08:02.362 05:23:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:02.362 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.363 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.363 05:23:05 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.363 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.363 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.363 05:23:05 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:02.363 05:23:05 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:02.363 05:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.363 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.624 05:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.624 05:23:05 -- target/discovery.sh@49 -- # check_bdevs= 00:08:02.624 05:23:05 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:02.624 05:23:05 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:02.624 05:23:05 -- target/discovery.sh@57 -- # nvmftestfini 00:08:02.624 05:23:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:02.624 05:23:05 -- nvmf/common.sh@116 -- # sync 00:08:02.624 05:23:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:02.624 05:23:05 -- nvmf/common.sh@119 -- # set +e 00:08:02.624 05:23:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:02.624 05:23:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:02.624 rmmod nvme_tcp 00:08:02.624 rmmod nvme_fabrics 00:08:02.624 rmmod nvme_keyring 00:08:02.624 05:23:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:02.624 05:23:05 -- nvmf/common.sh@123 -- # set -e 00:08:02.624 05:23:05 -- nvmf/common.sh@124 -- # return 0 00:08:02.624 05:23:05 -- nvmf/common.sh@477 -- # '[' -n 1648938 ']' 00:08:02.624 05:23:05 -- nvmf/common.sh@478 -- # killprocess 1648938 00:08:02.624 05:23:05 -- common/autotest_common.sh@936 -- # '[' -z 1648938 ']' 00:08:02.624 05:23:05 -- common/autotest_common.sh@940 -- # kill -0 1648938 00:08:02.624 05:23:05 -- common/autotest_common.sh@941 -- # uname 00:08:02.624 05:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.624 05:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648938 00:08:02.624 05:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:02.624 05:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:02.624 05:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648938' 00:08:02.624 killing process with pid 1648938 00:08:02.624 05:23:05 -- common/autotest_common.sh@955 -- # kill 1648938 00:08:02.625 [2024-12-07 05:23:05.770157] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:02.625 05:23:05 -- common/autotest_common.sh@960 -- # wait 1648938 00:08:02.886 05:23:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:02.886 05:23:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:02.886 05:23:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:02.886 05:23:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.886 05:23:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:02.886 05:23:05 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.886 05:23:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.886 05:23:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.799 05:23:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:04.799 00:08:04.799 real 0m11.557s 00:08:04.799 user 0m8.725s 00:08:04.799 sys 0m5.935s 00:08:04.799 05:23:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.799 05:23:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.799 ************************************ 00:08:04.799 END TEST nvmf_discovery 00:08:04.799 ************************************ 00:08:04.799 05:23:08 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:04.799 05:23:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:04.799 05:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.799 05:23:08 -- common/autotest_common.sh@10 -- # set +x 00:08:04.799 ************************************ 00:08:04.799 START TEST nvmf_referrals 00:08:04.799 ************************************ 00:08:04.799 05:23:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:05.061 * Looking for test storage... 00:08:05.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.061 05:23:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:05.061 05:23:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:05.061 05:23:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:05.061 05:23:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:05.061 05:23:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:05.061 05:23:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:05.061 05:23:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:05.061 05:23:08 -- scripts/common.sh@335 -- # IFS=.-: 00:08:05.061 05:23:08 -- scripts/common.sh@335 -- # read -ra ver1 00:08:05.061 05:23:08 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.061 05:23:08 -- scripts/common.sh@336 -- # read -ra ver2 00:08:05.061 05:23:08 -- scripts/common.sh@337 -- # local 'op=<' 00:08:05.061 05:23:08 -- scripts/common.sh@339 -- # ver1_l=2 00:08:05.061 05:23:08 -- scripts/common.sh@340 -- # ver2_l=1 00:08:05.061 05:23:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:05.061 05:23:08 -- scripts/common.sh@343 -- # case "$op" in 00:08:05.061 05:23:08 -- scripts/common.sh@344 -- # : 1 00:08:05.061 05:23:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:05.061 05:23:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.061 05:23:08 -- scripts/common.sh@364 -- # decimal 1 00:08:05.061 05:23:08 -- scripts/common.sh@352 -- # local d=1 00:08:05.061 05:23:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.061 05:23:08 -- scripts/common.sh@354 -- # echo 1 00:08:05.061 05:23:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:05.061 05:23:08 -- scripts/common.sh@365 -- # decimal 2 00:08:05.061 05:23:08 -- scripts/common.sh@352 -- # local d=2 00:08:05.061 05:23:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.061 05:23:08 -- scripts/common.sh@354 -- # echo 2 00:08:05.061 05:23:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:05.061 05:23:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:05.061 05:23:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:05.061 05:23:08 -- scripts/common.sh@367 -- # return 0 00:08:05.061 05:23:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.061 05:23:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:05.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.061 --rc genhtml_branch_coverage=1 00:08:05.061 --rc genhtml_function_coverage=1 00:08:05.061 --rc genhtml_legend=1 00:08:05.061 --rc geninfo_all_blocks=1 00:08:05.061 --rc geninfo_unexecuted_blocks=1 00:08:05.061 00:08:05.061 ' 00:08:05.061 05:23:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:05.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.061 --rc genhtml_branch_coverage=1 00:08:05.061 --rc genhtml_function_coverage=1 00:08:05.061 --rc genhtml_legend=1 00:08:05.061 --rc geninfo_all_blocks=1 00:08:05.061 --rc geninfo_unexecuted_blocks=1 00:08:05.061 00:08:05.061 ' 00:08:05.061 05:23:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:05.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.061 --rc genhtml_branch_coverage=1 00:08:05.061 --rc genhtml_function_coverage=1 00:08:05.061 --rc genhtml_legend=1 00:08:05.061 --rc geninfo_all_blocks=1 00:08:05.061 --rc geninfo_unexecuted_blocks=1 00:08:05.061 00:08:05.061 ' 00:08:05.061 05:23:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:05.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.061 --rc genhtml_branch_coverage=1 00:08:05.061 --rc genhtml_function_coverage=1 00:08:05.061 --rc genhtml_legend=1 00:08:05.061 --rc geninfo_all_blocks=1 00:08:05.061 --rc geninfo_unexecuted_blocks=1 00:08:05.061 00:08:05.061 ' 00:08:05.061 05:23:08 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.061 05:23:08 -- nvmf/common.sh@7 -- # uname -s 00:08:05.061 05:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.061 05:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.061 05:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.061 05:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.061 05:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.061 05:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.061 05:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.061 05:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.061 05:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.061 05:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.061 05:23:08 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:05.061 05:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:05.061 05:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.061 05:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.061 05:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.061 05:23:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.061 05:23:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.061 05:23:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.061 05:23:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.061 05:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.061 05:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.062 05:23:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.062 05:23:08 -- paths/export.sh@5 -- # export PATH 00:08:05.062 05:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.062 05:23:08 -- nvmf/common.sh@46 -- # : 0 00:08:05.062 05:23:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:05.062 05:23:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:05.062 05:23:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:05.062 05:23:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.062 05:23:08 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.062 05:23:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:05.062 05:23:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:05.062 05:23:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:05.062 05:23:08 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:05.062 05:23:08 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:05.062 05:23:08 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:05.062 05:23:08 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:05.062 05:23:08 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:05.062 05:23:08 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:05.062 05:23:08 -- target/referrals.sh@37 -- # nvmftestinit 00:08:05.062 05:23:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:05.062 05:23:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.062 05:23:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:05.062 05:23:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:05.062 05:23:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:05.062 05:23:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.062 05:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.062 05:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.062 05:23:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:05.062 05:23:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:05.062 05:23:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:05.062 05:23:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.205 05:23:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:13.205 05:23:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:13.205 05:23:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:13.205 05:23:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:13.205 05:23:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:13.205 05:23:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:13.205 05:23:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:13.205 05:23:15 -- nvmf/common.sh@294 -- # net_devs=() 00:08:13.205 05:23:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:13.205 05:23:15 -- nvmf/common.sh@295 -- # e810=() 00:08:13.205 05:23:15 -- nvmf/common.sh@295 -- # local -ga e810 00:08:13.205 05:23:15 -- nvmf/common.sh@296 -- # x722=() 00:08:13.205 05:23:15 -- nvmf/common.sh@296 -- # local -ga x722 00:08:13.205 05:23:15 -- nvmf/common.sh@297 -- # mlx=() 00:08:13.205 05:23:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:13.205 05:23:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.205 05:23:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.205 05:23:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.205 05:23:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@316 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.206 05:23:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:13.206 05:23:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:13.206 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:13.206 05:23:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:13.206 05:23:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:13.206 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:13.206 05:23:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:13.206 05:23:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.206 05:23:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.206 05:23:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:13.206 Found net devices under 0000:31:00.0: cvl_0_0 00:08:13.206 05:23:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:13.206 05:23:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.206 05:23:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.206 05:23:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:13.206 Found net devices under 0000:31:00.1: cvl_0_1 00:08:13.206 05:23:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:13.206 05:23:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:13.206 05:23:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.206 05:23:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.206 05:23:15 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:13.206 05:23:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.206 05:23:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.206 05:23:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:13.206 05:23:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.206 05:23:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.206 05:23:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:13.206 05:23:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:13.206 05:23:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.206 05:23:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.206 05:23:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.206 05:23:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.206 05:23:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:13.206 05:23:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.206 05:23:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.206 05:23:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.206 05:23:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:13.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:08:13.206 00:08:13.206 --- 10.0.0.2 ping statistics --- 00:08:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.206 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:08:13.206 05:23:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:13.206 00:08:13.206 --- 10.0.0.1 ping statistics --- 00:08:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.206 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:13.206 05:23:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.206 05:23:15 -- nvmf/common.sh@410 -- # return 0 00:08:13.206 05:23:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:13.206 05:23:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.206 05:23:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:13.206 05:23:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.206 05:23:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:13.206 05:23:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:13.206 05:23:15 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:13.206 05:23:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:13.206 05:23:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.206 05:23:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.206 05:23:15 -- nvmf/common.sh@469 -- # nvmfpid=1653624 00:08:13.206 05:23:15 -- nvmf/common.sh@470 -- # waitforlisten 1653624 00:08:13.206 05:23:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.206 05:23:15 -- common/autotest_common.sh@829 -- # '[' -z 1653624 ']' 00:08:13.206 05:23:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.206 05:23:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.206 05:23:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.206 05:23:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.206 05:23:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.206 [2024-12-07 05:23:15.713977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.206 [2024-12-07 05:23:15.714050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.206 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.206 [2024-12-07 05:23:15.783655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.206 [2024-12-07 05:23:15.847920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.206 [2024-12-07 05:23:15.848055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.206 [2024-12-07 05:23:15.848066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.206 [2024-12-07 05:23:15.848075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:13.206 [2024-12-07 05:23:15.848142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.206 [2024-12-07 05:23:15.848264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.206 [2024-12-07 05:23:15.848419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.206 [2024-12-07 05:23:15.848420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.465 05:23:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.465 05:23:16 -- common/autotest_common.sh@862 -- # return 0 00:08:13.465 05:23:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.465 05:23:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.465 05:23:16 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 [2024-12-07 05:23:16.537273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 [2024-12-07 05:23:16.553493] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.465 05:23:16 -- target/referrals.sh@48 -- # jq length 00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:13.465 05:23:16 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:13.465 05:23:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.465 05:23:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.465 05:23:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
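The referrals.sh trace above creates the TCP transport, exposes the discovery service on 10.0.0.2:8009, registers three referrals (127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430) and verifies them over RPC. A condensed sketch of the same sequence, assuming rpc.py is SPDK's scripts/rpc.py; the test's get_referral_ips helper is paraphrased by the two jq pipelines:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq length                           # expect 3
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort   # expect the three referral IPs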
00:08:13.465 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.465 05:23:16 -- target/referrals.sh@21 -- # sort 00:08:13.465 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.465 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.465 05:23:16 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.465 05:23:16 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:13.465 05:23:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.465 05:23:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.465 05:23:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.465 05:23:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.465 05:23:16 -- target/referrals.sh@26 -- # sort 00:08:13.725 05:23:16 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.725 05:23:16 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.725 05:23:16 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.725 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.725 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.725 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.725 05:23:16 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.725 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.725 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.725 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.725 05:23:16 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.725 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.725 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.725 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.725 05:23:16 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.725 05:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.725 05:23:16 -- target/referrals.sh@56 -- # jq length 00:08:13.725 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.725 05:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.725 05:23:16 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:13.725 05:23:16 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:13.725 05:23:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.725 05:23:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.725 05:23:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.725 05:23:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.725 05:23:16 -- target/referrals.sh@26 -- # sort 00:08:13.986 05:23:17 -- target/referrals.sh@26 -- # echo 00:08:13.986 05:23:17 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:13.986 05:23:17 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:13.986 05:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.986 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.986 05:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.986 05:23:17 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.986 05:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.986 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.986 05:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.986 05:23:17 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:13.986 05:23:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.986 05:23:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.986 05:23:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.986 05:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.986 05:23:17 -- target/referrals.sh@21 -- # sort 00:08:13.986 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.986 05:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.986 05:23:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.986 05:23:17 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.986 05:23:17 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.986 05:23:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.986 05:23:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.986 05:23:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.986 05:23:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.987 05:23:17 -- target/referrals.sh@26 -- # sort 00:08:14.246 05:23:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:14.246 05:23:17 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:14.246 05:23:17 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:14.246 05:23:17 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:14.246 05:23:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:14.246 05:23:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.246 05:23:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:14.505 05:23:17 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:14.505 05:23:17 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:14.505 05:23:17 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:14.505 05:23:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:14.505 05:23:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.505 05:23:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.766 05:23:17 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.766 05:23:17 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:14.766 05:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.766 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:08:14.766 05:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.766 05:23:17 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:14.766 05:23:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.766 05:23:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.766 05:23:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.766 05:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.766 05:23:17 -- target/referrals.sh@21 -- # sort 00:08:14.766 05:23:17 -- common/autotest_common.sh@10 -- # set +x 00:08:14.766 05:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.766 05:23:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:14.766 05:23:17 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:14.766 05:23:17 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:14.766 05:23:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.766 05:23:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.766 05:23:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.766 05:23:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.766 05:23:17 -- target/referrals.sh@26 -- # sort 00:08:15.026 05:23:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:15.026 05:23:18 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:15.026 05:23:18 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:15.026 05:23:18 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:15.026 05:23:18 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.026 05:23:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.026 05:23:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:15.026 05:23:18 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:15.026 05:23:18 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:15.026 05:23:18 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:15.026 05:23:18 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:15.026 05:23:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.026 05:23:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
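The nvme-cli queries above read the discovery log from the initiator side and pick out both referral flavours (the subsystem-type referral pointing at nqn.2016-06.io.spdk:cnode1 and the plain discovery referral) by filtering on record subtype. A minimal sketch of that verification, assuming HOSTNQN and HOSTID hold the values produced by nvme gen-hostnqn earlier in this run:

  discover_json() {
      nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json
  }
  # referral targets, excluding the discovery subsystem being queried
  discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # subnqn of the subsystem-type referral   -> nqn.2016-06.io.spdk:cnode1
  discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # subnqn of the discovery-type referral   -> nqn.2014-08.org.nvmexpress.discovery
  discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'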
00:08:15.286 05:23:18 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:15.286 05:23:18 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:15.286 05:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.286 05:23:18 -- common/autotest_common.sh@10 -- # set +x 00:08:15.286 05:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.286 05:23:18 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.286 05:23:18 -- target/referrals.sh@82 -- # jq length 00:08:15.286 05:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.286 05:23:18 -- common/autotest_common.sh@10 -- # set +x 00:08:15.286 05:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.286 05:23:18 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:15.286 05:23:18 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:15.286 05:23:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.286 05:23:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.286 05:23:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.286 05:23:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.286 05:23:18 -- target/referrals.sh@26 -- # sort 00:08:15.546 05:23:18 -- target/referrals.sh@26 -- # echo 00:08:15.546 05:23:18 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:15.546 05:23:18 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:15.546 05:23:18 -- target/referrals.sh@86 -- # nvmftestfini 00:08:15.546 05:23:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:15.546 05:23:18 -- nvmf/common.sh@116 -- # sync 00:08:15.546 05:23:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:15.546 05:23:18 -- nvmf/common.sh@119 -- # set +e 00:08:15.546 05:23:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:15.546 05:23:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:15.546 rmmod nvme_tcp 00:08:15.546 rmmod nvme_fabrics 00:08:15.546 rmmod nvme_keyring 00:08:15.546 05:23:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:15.546 05:23:18 -- nvmf/common.sh@123 -- # set -e 00:08:15.546 05:23:18 -- nvmf/common.sh@124 -- # return 0 00:08:15.546 05:23:18 -- nvmf/common.sh@477 -- # '[' -n 1653624 ']' 00:08:15.546 05:23:18 -- nvmf/common.sh@478 -- # killprocess 1653624 00:08:15.546 05:23:18 -- common/autotest_common.sh@936 -- # '[' -z 1653624 ']' 00:08:15.546 05:23:18 -- common/autotest_common.sh@940 -- # kill -0 1653624 00:08:15.546 05:23:18 -- common/autotest_common.sh@941 -- # uname 00:08:15.546 05:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.546 05:23:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653624 00:08:15.546 05:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:15.546 05:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:15.546 05:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653624' 00:08:15.546 killing process with pid 1653624 00:08:15.546 05:23:18 -- common/autotest_common.sh@955 -- # kill 1653624 00:08:15.546 05:23:18 -- common/autotest_common.sh@960 -- # wait 1653624 
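The tail of the test above removes the remaining referral, confirms that both the RPC view and the discovery log are empty, and then tears the target down via nvmftestfini (sync, unload the NVMe/TCP initiator modules, kill the nvmf_tgt process). A rough equivalent of that cleanup, assuming nvmfpid holds the target pid recorded by nvmfappstart and that the target was started from the same shell (otherwise the wait has no effect):

  rpc.py nvmf_discovery_get_referrals | jq length    # expect 0 before shutting down
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true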
00:08:15.806 05:23:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:15.806 05:23:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.806 05:23:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.806 05:23:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.806 05:23:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.806 05:23:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.806 05:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.806 05:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.717 05:23:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:17.717 00:08:17.717 real 0m12.928s 00:08:17.717 user 0m15.158s 00:08:17.717 sys 0m6.306s 00:08:17.717 05:23:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.717 05:23:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.717 ************************************ 00:08:17.717 END TEST nvmf_referrals 00:08:17.717 ************************************ 00:08:17.977 05:23:20 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.977 05:23:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.977 05:23:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.977 05:23:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 ************************************ 00:08:17.977 START TEST nvmf_connect_disconnect 00:08:17.977 ************************************ 00:08:17.977 05:23:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.977 * Looking for test storage... 00:08:17.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.977 05:23:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.977 05:23:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.977 05:23:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.977 05:23:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.977 05:23:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.977 05:23:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.977 05:23:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.977 05:23:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.977 05:23:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.977 05:23:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.977 05:23:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.977 05:23:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.977 05:23:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.977 05:23:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.977 05:23:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.977 05:23:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.977 05:23:21 -- scripts/common.sh@344 -- # : 1 00:08:17.977 05:23:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.977 05:23:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.977 05:23:21 -- scripts/common.sh@364 -- # decimal 1 00:08:17.977 05:23:21 -- scripts/common.sh@352 -- # local d=1 00:08:17.977 05:23:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.977 05:23:21 -- scripts/common.sh@354 -- # echo 1 00:08:17.977 05:23:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.977 05:23:21 -- scripts/common.sh@365 -- # decimal 2 00:08:17.977 05:23:21 -- scripts/common.sh@352 -- # local d=2 00:08:17.977 05:23:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.977 05:23:21 -- scripts/common.sh@354 -- # echo 2 00:08:17.977 05:23:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.977 05:23:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.977 05:23:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.977 05:23:21 -- scripts/common.sh@367 -- # return 0 00:08:17.977 05:23:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.977 05:23:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.977 --rc genhtml_branch_coverage=1 00:08:17.977 --rc genhtml_function_coverage=1 00:08:17.977 --rc genhtml_legend=1 00:08:17.977 --rc geninfo_all_blocks=1 00:08:17.977 --rc geninfo_unexecuted_blocks=1 00:08:17.977 00:08:17.977 ' 00:08:17.977 05:23:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.977 --rc genhtml_branch_coverage=1 00:08:17.977 --rc genhtml_function_coverage=1 00:08:17.977 --rc genhtml_legend=1 00:08:17.977 --rc geninfo_all_blocks=1 00:08:17.977 --rc geninfo_unexecuted_blocks=1 00:08:17.977 00:08:17.977 ' 00:08:17.977 05:23:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.977 --rc genhtml_branch_coverage=1 00:08:17.978 --rc genhtml_function_coverage=1 00:08:17.978 --rc genhtml_legend=1 00:08:17.978 --rc geninfo_all_blocks=1 00:08:17.978 --rc geninfo_unexecuted_blocks=1 00:08:17.978 00:08:17.978 ' 00:08:17.978 05:23:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.978 --rc genhtml_branch_coverage=1 00:08:17.978 --rc genhtml_function_coverage=1 00:08:17.978 --rc genhtml_legend=1 00:08:17.978 --rc geninfo_all_blocks=1 00:08:17.978 --rc geninfo_unexecuted_blocks=1 00:08:17.978 00:08:17.978 ' 00:08:17.978 05:23:21 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.978 05:23:21 -- nvmf/common.sh@7 -- # uname -s 00:08:17.978 05:23:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.978 05:23:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.978 05:23:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.978 05:23:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.978 05:23:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.978 05:23:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.978 05:23:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.978 05:23:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.978 05:23:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.978 05:23:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.978 05:23:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.978 05:23:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.978 05:23:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.978 05:23:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.978 05:23:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.978 05:23:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.978 05:23:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.978 05:23:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.978 05:23:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.978 05:23:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.978 05:23:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.978 05:23:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.978 05:23:21 -- paths/export.sh@5 -- # export PATH 00:08:17.978 05:23:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.978 05:23:21 -- nvmf/common.sh@46 -- # : 0 00:08:17.978 05:23:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.978 05:23:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.978 05:23:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.978 05:23:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.978 05:23:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.978 05:23:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.978 05:23:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.978 05:23:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:18.238 05:23:21 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.238 05:23:21 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.238 05:23:21 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:18.238 05:23:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:18.238 05:23:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.238 05:23:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:18.238 05:23:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:18.238 05:23:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:18.238 05:23:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.238 05:23:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.238 05:23:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.238 05:23:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:18.238 05:23:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:18.238 05:23:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:18.238 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:08:26.374 05:23:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.374 05:23:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:26.374 05:23:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:26.374 05:23:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:26.374 05:23:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:26.374 05:23:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:26.374 05:23:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:26.374 05:23:28 -- nvmf/common.sh@294 -- # net_devs=() 00:08:26.374 05:23:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:26.374 05:23:28 -- nvmf/common.sh@295 -- # e810=() 00:08:26.374 05:23:28 -- nvmf/common.sh@295 -- # local -ga e810 00:08:26.374 05:23:28 -- nvmf/common.sh@296 -- # x722=() 00:08:26.374 05:23:28 -- nvmf/common.sh@296 -- # local -ga x722 00:08:26.374 05:23:28 -- nvmf/common.sh@297 -- # mlx=() 00:08:26.374 05:23:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:26.374 05:23:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.374 05:23:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:26.374 05:23:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@326 -- # [[ 
e810 == mlx5 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:26.374 05:23:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:26.374 05:23:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.374 05:23:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:26.374 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:26.374 05:23:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.374 05:23:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:26.374 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:26.374 05:23:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:26.374 05:23:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:26.374 05:23:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.374 05:23:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.374 05:23:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.374 05:23:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.374 05:23:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:26.375 Found net devices under 0000:31:00.0: cvl_0_0 00:08:26.375 05:23:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.375 05:23:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.375 05:23:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.375 05:23:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.375 05:23:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.375 05:23:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:26.375 Found net devices under 0000:31:00.1: cvl_0_1 00:08:26.375 05:23:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.375 05:23:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:26.375 05:23:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:26.375 05:23:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:26.375 05:23:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:26.375 05:23:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:26.375 05:23:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.375 05:23:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.375 05:23:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.375 05:23:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:26.375 05:23:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.375 05:23:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.375 05:23:28 -- nvmf/common.sh@239 -- # 
NVMF_SECOND_TARGET_IP= 00:08:26.375 05:23:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.375 05:23:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.375 05:23:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:26.375 05:23:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:26.375 05:23:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.375 05:23:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.375 05:23:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.375 05:23:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.375 05:23:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:26.375 05:23:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.375 05:23:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.375 05:23:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.375 05:23:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:26.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:08:26.375 00:08:26.375 --- 10.0.0.2 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:08:26.375 05:23:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:26.375 00:08:26.375 --- 10.0.0.1 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:26.375 05:23:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.375 05:23:28 -- nvmf/common.sh@410 -- # return 0 00:08:26.375 05:23:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:26.375 05:23:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.375 05:23:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:26.375 05:23:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:26.375 05:23:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.375 05:23:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:26.375 05:23:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:26.375 05:23:28 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:26.375 05:23:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:26.375 05:23:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.375 05:23:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 05:23:28 -- nvmf/common.sh@469 -- # nvmfpid=1658605 00:08:26.375 05:23:28 -- nvmf/common.sh@470 -- # waitforlisten 1658605 00:08:26.375 05:23:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.375 05:23:28 -- common/autotest_common.sh@829 -- # '[' -z 1658605 ']' 00:08:26.375 05:23:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.375 05:23:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.375 05:23:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.375 05:23:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.375 05:23:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 [2024-12-07 05:23:28.784893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.375 [2024-12-07 05:23:28.784956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.375 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.375 [2024-12-07 05:23:28.857596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.375 [2024-12-07 05:23:28.929571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.375 [2024-12-07 05:23:28.929709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.375 [2024-12-07 05:23:28.929720] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.375 [2024-12-07 05:23:28.929730] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.375 [2024-12-07 05:23:28.929887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.375 [2024-12-07 05:23:28.930003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.375 [2024-12-07 05:23:28.930160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.375 [2024-12-07 05:23:28.930366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.375 05:23:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.375 05:23:29 -- common/autotest_common.sh@862 -- # return 0 00:08:26.375 05:23:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:26.375 05:23:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.375 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 05:23:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.636 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.636 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 [2024-12-07 05:23:29.627190] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.636 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:26.636 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.636 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.636 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.636 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.636 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.636 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.636 05:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.636 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.636 [2024-12-07 05:23:29.686599] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.636 05:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:26.636 05:23:29 -- target/connect_disconnect.sh@34 -- # set +x 00:08:29.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.475 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:49.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.833 [2024-12-07 05:25:55.529875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38d40 is same with the state(5) to be set 00:10:52.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.480 [2024-12-07 05:26:28.240990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38d40 is same with the state(5) to be set 00:11:25.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:32.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.189 05:27:24 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:22.189 05:27:24 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:22.189 05:27:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:22.189 05:27:24 -- nvmf/common.sh@116 -- # sync 00:12:22.189 05:27:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:22.189 05:27:24 -- nvmf/common.sh@119 -- # set +e 00:12:22.189 05:27:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:22.189 05:27:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:22.189 rmmod nvme_tcp 00:12:22.189 rmmod nvme_fabrics 00:12:22.189 rmmod nvme_keyring 00:12:22.189 05:27:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:22.189 05:27:25 -- nvmf/common.sh@123 -- # set -e 00:12:22.189 05:27:25 -- nvmf/common.sh@124 -- # return 0 00:12:22.189 05:27:25 -- nvmf/common.sh@477 -- # '[' -n 1658605 ']' 00:12:22.189 05:27:25 -- nvmf/common.sh@478 -- # killprocess 1658605 00:12:22.189 05:27:25 -- common/autotest_common.sh@936 -- # '[' -z 1658605 ']' 00:12:22.189 05:27:25 -- common/autotest_common.sh@940 -- # kill -0 1658605 00:12:22.189 05:27:25 -- common/autotest_common.sh@941 -- # uname 00:12:22.189 05:27:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.189 05:27:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1658605 00:12:22.189 05:27:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:22.189 05:27:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:22.189 05:27:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1658605' 00:12:22.189 killing process with pid 1658605 00:12:22.189 05:27:25 -- common/autotest_common.sh@955 -- # kill 1658605 00:12:22.189 05:27:25 -- common/autotest_common.sh@960 -- # wait 1658605 00:12:22.189 05:27:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:22.189 05:27:25 -- nvmf/common.sh@483 -- # 
[[ tcp == \t\c\p ]] 00:12:22.189 05:27:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:22.189 05:27:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.189 05:27:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:22.189 05:27:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.189 05:27:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.189 05:27:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.733 05:27:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:24.733 00:12:24.733 real 4m6.355s 00:12:24.733 user 15m36.249s 00:12:24.733 sys 0m26.962s 00:12:24.733 05:27:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.733 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:12:24.733 ************************************ 00:12:24.733 END TEST nvmf_connect_disconnect 00:12:24.733 ************************************ 00:12:24.733 05:27:27 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.733 05:27:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.733 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:12:24.733 ************************************ 00:12:24.733 START TEST nvmf_multitarget 00:12:24.733 ************************************ 00:12:24.733 05:27:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.733 * Looking for test storage... 00:12:24.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.733 05:27:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:24.733 05:27:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:24.733 05:27:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:24.733 05:27:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:24.733 05:27:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:24.733 05:27:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:24.733 05:27:27 -- scripts/common.sh@335 -- # IFS=.-: 00:12:24.733 05:27:27 -- scripts/common.sh@335 -- # read -ra ver1 00:12:24.733 05:27:27 -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.733 05:27:27 -- scripts/common.sh@336 -- # read -ra ver2 00:12:24.733 05:27:27 -- scripts/common.sh@337 -- # local 'op=<' 00:12:24.733 05:27:27 -- scripts/common.sh@339 -- # ver1_l=2 00:12:24.733 05:27:27 -- scripts/common.sh@340 -- # ver2_l=1 00:12:24.733 05:27:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:24.733 05:27:27 -- scripts/common.sh@343 -- # case "$op" in 00:12:24.733 05:27:27 -- scripts/common.sh@344 -- # : 1 00:12:24.733 05:27:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:24.733 05:27:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.733 05:27:27 -- scripts/common.sh@364 -- # decimal 1 00:12:24.733 05:27:27 -- scripts/common.sh@352 -- # local d=1 00:12:24.733 05:27:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.733 05:27:27 -- scripts/common.sh@354 -- # echo 1 00:12:24.733 05:27:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:24.733 05:27:27 -- scripts/common.sh@365 -- # decimal 2 00:12:24.733 05:27:27 -- scripts/common.sh@352 -- # local d=2 00:12:24.733 05:27:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.733 05:27:27 -- scripts/common.sh@354 -- # echo 2 00:12:24.733 05:27:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:24.733 05:27:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:24.733 05:27:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:24.733 05:27:27 -- scripts/common.sh@367 -- # return 0 00:12:24.733 05:27:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:24.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.733 --rc genhtml_branch_coverage=1 00:12:24.733 --rc genhtml_function_coverage=1 00:12:24.733 --rc genhtml_legend=1 00:12:24.733 --rc geninfo_all_blocks=1 00:12:24.733 --rc geninfo_unexecuted_blocks=1 00:12:24.733 00:12:24.733 ' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:24.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.733 --rc genhtml_branch_coverage=1 00:12:24.733 --rc genhtml_function_coverage=1 00:12:24.733 --rc genhtml_legend=1 00:12:24.733 --rc geninfo_all_blocks=1 00:12:24.733 --rc geninfo_unexecuted_blocks=1 00:12:24.733 00:12:24.733 ' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:24.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.733 --rc genhtml_branch_coverage=1 00:12:24.733 --rc genhtml_function_coverage=1 00:12:24.733 --rc genhtml_legend=1 00:12:24.733 --rc geninfo_all_blocks=1 00:12:24.733 --rc geninfo_unexecuted_blocks=1 00:12:24.733 00:12:24.733 ' 00:12:24.733 05:27:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:24.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.733 --rc genhtml_branch_coverage=1 00:12:24.733 --rc genhtml_function_coverage=1 00:12:24.733 --rc genhtml_legend=1 00:12:24.733 --rc geninfo_all_blocks=1 00:12:24.733 --rc geninfo_unexecuted_blocks=1 00:12:24.733 00:12:24.733 ' 00:12:24.733 05:27:27 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.733 05:27:27 -- nvmf/common.sh@7 -- # uname -s 00:12:24.733 05:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.733 05:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.733 05:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.733 05:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.733 05:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.733 05:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.733 05:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.733 05:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.733 05:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.733 05:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.733 05:27:27 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.733 05:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.733 05:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.733 05:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.733 05:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.733 05:27:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.733 05:27:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.733 05:27:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.734 05:27:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.734 05:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.734 05:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.734 05:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.734 05:27:27 -- paths/export.sh@5 -- # export PATH 00:12:24.734 05:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.734 05:27:27 -- nvmf/common.sh@46 -- # : 0 00:12:24.734 05:27:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:24.734 05:27:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:24.734 05:27:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:24.734 05:27:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.734 05:27:27 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.734 05:27:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:24.734 05:27:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:24.734 05:27:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:24.734 05:27:27 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.734 05:27:27 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:24.734 05:27:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:24.734 05:27:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.734 05:27:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:24.734 05:27:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:24.734 05:27:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:24.734 05:27:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.734 05:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.734 05:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.734 05:27:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:24.734 05:27:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:24.734 05:27:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:24.734 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:12:32.882 05:27:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:32.882 05:27:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:32.882 05:27:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:32.882 05:27:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:32.882 05:27:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:32.882 05:27:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:32.882 05:27:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:32.882 05:27:34 -- nvmf/common.sh@294 -- # net_devs=() 00:12:32.882 05:27:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:32.882 05:27:34 -- nvmf/common.sh@295 -- # e810=() 00:12:32.882 05:27:34 -- nvmf/common.sh@295 -- # local -ga e810 00:12:32.882 05:27:34 -- nvmf/common.sh@296 -- # x722=() 00:12:32.882 05:27:34 -- nvmf/common.sh@296 -- # local -ga x722 00:12:32.882 05:27:34 -- nvmf/common.sh@297 -- # mlx=() 00:12:32.882 05:27:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:32.882 05:27:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.882 05:27:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:32.882 05:27:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 
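The trace above sorts the host's NICs into the e810/x722/mlx buckets purely by PCI vendor:device ID before choosing the ports for the TCP transport. A minimal stand-alone illustration of that bucketing, assuming lspci-based parsing (the array names and IDs mirror the log, but this sketch is not SPDK's own gather_supported_nvmf_pci_devs):

declare -a e810=() x722=() mlx=()
while read -r slot _ id _; do
  case "${id,,}" in
    8086:1592|8086:159b) e810+=("$slot") ;;   # Intel E810 family (IDs seen in the log)
    8086:37d2)           x722+=("$slot") ;;   # Intel X722 family
    15b3:*)              mlx+=("$slot")  ;;   # Mellanox devices
  esac
done < <(lspci -Dn)
echo "E810 ports: ${e810[*]:-none}"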
00:12:32.882 05:27:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:32.882 05:27:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:32.882 05:27:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:32.882 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:32.882 05:27:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:32.882 05:27:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:32.882 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:32.882 05:27:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:32.882 05:27:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.882 05:27:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.882 05:27:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:32.882 Found net devices under 0000:31:00.0: cvl_0_0 00:12:32.882 05:27:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.882 05:27:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:32.882 05:27:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.882 05:27:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.882 05:27:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:32.882 Found net devices under 0000:31:00.1: cvl_0_1 00:12:32.882 05:27:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.882 05:27:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:32.882 05:27:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:32.882 05:27:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:32.882 05:27:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.882 05:27:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.882 05:27:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.882 05:27:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:32.882 05:27:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.882 05:27:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.882 05:27:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
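The lines that follow carve the two detected ports into a loopback test topology: cvl_0_0 is moved into a dedicated network namespace and serves as the NVMe/TCP target side, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of that setup, reusing the interface, address, and namespace names from the trace (illustrative only; the authoritative sequence is the traced nvmf_tcp_init output below):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify reachability in both directions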
00:12:32.882 05:27:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.882 05:27:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.882 05:27:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:32.882 05:27:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:32.882 05:27:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.882 05:27:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.882 05:27:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.882 05:27:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.882 05:27:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:32.882 05:27:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.882 05:27:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.882 05:27:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.882 05:27:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:32.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:12:32.882 00:12:32.882 --- 10.0.0.2 ping statistics --- 00:12:32.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.882 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:12:32.882 05:27:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:32.882 00:12:32.882 --- 10.0.0.1 ping statistics --- 00:12:32.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.882 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:32.882 05:27:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.882 05:27:35 -- nvmf/common.sh@410 -- # return 0 00:12:32.882 05:27:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.882 05:27:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.882 05:27:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.882 05:27:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.882 05:27:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.882 05:27:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.882 05:27:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.882 05:27:35 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:32.882 05:27:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.882 05:27:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.882 05:27:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.882 05:27:35 -- nvmf/common.sh@469 -- # nvmfpid=1712132 00:12:32.882 05:27:35 -- nvmf/common.sh@470 -- # waitforlisten 1712132 00:12:32.882 05:27:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.882 05:27:35 -- common/autotest_common.sh@829 -- # '[' -z 1712132 ']' 00:12:32.882 05:27:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.882 05:27:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.882 05:27:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:32.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.882 05:27:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.882 05:27:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.882 [2024-12-07 05:27:35.109364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.882 [2024-12-07 05:27:35.109426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.882 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.882 [2024-12-07 05:27:35.182443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.882 [2024-12-07 05:27:35.254945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.882 [2024-12-07 05:27:35.255086] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.882 [2024-12-07 05:27:35.255097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.882 [2024-12-07 05:27:35.255106] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.882 [2024-12-07 05:27:35.255343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.882 [2024-12-07 05:27:35.255460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.882 [2024-12-07 05:27:35.255619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.882 [2024-12-07 05:27:35.255620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.882 05:27:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.882 05:27:35 -- common/autotest_common.sh@862 -- # return 0 00:12:32.882 05:27:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.882 05:27:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.882 05:27:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.882 05:27:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.882 05:27:35 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.882 05:27:35 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.883 05:27:35 -- target/multitarget.sh@21 -- # jq length 00:12:32.883 05:27:36 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:32.883 05:27:36 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:33.141 "nvmf_tgt_1" 00:12:33.141 05:27:36 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:33.141 "nvmf_tgt_2" 00:12:33.141 05:27:36 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.141 05:27:36 -- target/multitarget.sh@28 -- # jq length 00:12:33.142 05:27:36 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:33.142 05:27:36 -- target/multitarget.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:33.401 true 00:12:33.401 05:27:36 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:33.401 true 00:12:33.402 05:27:36 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.402 05:27:36 -- target/multitarget.sh@35 -- # jq length 00:12:33.662 05:27:36 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:33.662 05:27:36 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:33.662 05:27:36 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:33.662 05:27:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.662 05:27:36 -- nvmf/common.sh@116 -- # sync 00:12:33.662 05:27:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.662 05:27:36 -- nvmf/common.sh@119 -- # set +e 00:12:33.662 05:27:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.662 05:27:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.662 rmmod nvme_tcp 00:12:33.662 rmmod nvme_fabrics 00:12:33.662 rmmod nvme_keyring 00:12:33.662 05:27:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:33.662 05:27:36 -- nvmf/common.sh@123 -- # set -e 00:12:33.662 05:27:36 -- nvmf/common.sh@124 -- # return 0 00:12:33.662 05:27:36 -- nvmf/common.sh@477 -- # '[' -n 1712132 ']' 00:12:33.662 05:27:36 -- nvmf/common.sh@478 -- # killprocess 1712132 00:12:33.662 05:27:36 -- common/autotest_common.sh@936 -- # '[' -z 1712132 ']' 00:12:33.662 05:27:36 -- common/autotest_common.sh@940 -- # kill -0 1712132 00:12:33.662 05:27:36 -- common/autotest_common.sh@941 -- # uname 00:12:33.662 05:27:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.662 05:27:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1712132 00:12:33.662 05:27:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.662 05:27:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.662 05:27:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1712132' 00:12:33.662 killing process with pid 1712132 00:12:33.662 05:27:36 -- common/autotest_common.sh@955 -- # kill 1712132 00:12:33.662 05:27:36 -- common/autotest_common.sh@960 -- # wait 1712132 00:12:33.923 05:27:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:33.923 05:27:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:33.923 05:27:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:33.923 05:27:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.923 05:27:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:33.923 05:27:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.923 05:27:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.923 05:27:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.835 05:27:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:35.835 00:12:35.835 real 0m11.605s 00:12:35.835 user 0m9.511s 00:12:35.835 sys 0m6.042s 00:12:35.835 05:27:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.835 05:27:38 -- common/autotest_common.sh@10 -- # set +x 00:12:35.835 ************************************ 00:12:35.835 END TEST nvmf_multitarget 00:12:35.835 ************************************ 00:12:35.835 05:27:39 -- nvmf/nvmf.sh@29 -- # run_test 
nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.835 05:27:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.835 05:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.835 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:12:35.835 ************************************ 00:12:35.835 START TEST nvmf_rpc 00:12:35.835 ************************************ 00:12:35.835 05:27:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.095 * Looking for test storage... 00:12:36.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.095 05:27:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:36.095 05:27:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:36.095 05:27:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:36.095 05:27:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:36.095 05:27:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:36.095 05:27:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:36.095 05:27:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:36.095 05:27:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:36.095 05:27:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:36.095 05:27:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.095 05:27:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:36.095 05:27:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:36.095 05:27:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:36.095 05:27:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:36.095 05:27:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:36.095 05:27:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:36.095 05:27:39 -- scripts/common.sh@344 -- # : 1 00:12:36.095 05:27:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:36.095 05:27:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.095 05:27:39 -- scripts/common.sh@364 -- # decimal 1 00:12:36.095 05:27:39 -- scripts/common.sh@352 -- # local d=1 00:12:36.095 05:27:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.095 05:27:39 -- scripts/common.sh@354 -- # echo 1 00:12:36.095 05:27:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:36.095 05:27:39 -- scripts/common.sh@365 -- # decimal 2 00:12:36.095 05:27:39 -- scripts/common.sh@352 -- # local d=2 00:12:36.095 05:27:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.095 05:27:39 -- scripts/common.sh@354 -- # echo 2 00:12:36.095 05:27:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:36.095 05:27:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:36.095 05:27:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:36.095 05:27:39 -- scripts/common.sh@367 -- # return 0 00:12:36.095 05:27:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.095 05:27:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.095 --rc genhtml_branch_coverage=1 00:12:36.095 --rc genhtml_function_coverage=1 00:12:36.095 --rc genhtml_legend=1 00:12:36.095 --rc geninfo_all_blocks=1 00:12:36.095 --rc geninfo_unexecuted_blocks=1 00:12:36.095 00:12:36.095 ' 00:12:36.095 05:27:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.095 --rc genhtml_branch_coverage=1 00:12:36.095 --rc genhtml_function_coverage=1 00:12:36.095 --rc genhtml_legend=1 00:12:36.095 --rc geninfo_all_blocks=1 00:12:36.095 --rc geninfo_unexecuted_blocks=1 00:12:36.095 00:12:36.095 ' 00:12:36.095 05:27:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.095 --rc genhtml_branch_coverage=1 00:12:36.095 --rc genhtml_function_coverage=1 00:12:36.095 --rc genhtml_legend=1 00:12:36.095 --rc geninfo_all_blocks=1 00:12:36.095 --rc geninfo_unexecuted_blocks=1 00:12:36.095 00:12:36.095 ' 00:12:36.095 05:27:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.095 --rc genhtml_branch_coverage=1 00:12:36.095 --rc genhtml_function_coverage=1 00:12:36.095 --rc genhtml_legend=1 00:12:36.095 --rc geninfo_all_blocks=1 00:12:36.095 --rc geninfo_unexecuted_blocks=1 00:12:36.095 00:12:36.095 ' 00:12:36.095 05:27:39 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.095 05:27:39 -- nvmf/common.sh@7 -- # uname -s 00:12:36.095 05:27:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.095 05:27:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.095 05:27:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.095 05:27:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.095 05:27:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.095 05:27:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.096 05:27:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.096 05:27:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.096 05:27:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.096 05:27:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.096 05:27:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.096 05:27:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.096 05:27:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.096 05:27:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.096 05:27:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.096 05:27:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.096 05:27:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.096 05:27:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.096 05:27:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.096 05:27:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.096 05:27:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.096 05:27:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.096 05:27:39 -- paths/export.sh@5 -- # export PATH 00:12:36.096 05:27:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.096 05:27:39 -- nvmf/common.sh@46 -- # : 0 00:12:36.096 05:27:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.096 05:27:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.096 05:27:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.096 05:27:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.096 05:27:39 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.096 05:27:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.096 05:27:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.096 05:27:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.096 05:27:39 -- target/rpc.sh@11 -- # loops=5 00:12:36.096 05:27:39 -- target/rpc.sh@23 -- # nvmftestinit 00:12:36.096 05:27:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.096 05:27:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.096 05:27:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.096 05:27:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.096 05:27:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.096 05:27:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.096 05:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.096 05:27:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.096 05:27:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:36.096 05:27:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:36.096 05:27:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:36.096 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:12:44.254 05:27:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:44.254 05:27:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:44.254 05:27:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:44.254 05:27:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:44.254 05:27:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:44.254 05:27:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:44.254 05:27:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:44.254 05:27:46 -- nvmf/common.sh@294 -- # net_devs=() 00:12:44.254 05:27:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:44.254 05:27:46 -- nvmf/common.sh@295 -- # e810=() 00:12:44.254 05:27:46 -- nvmf/common.sh@295 -- # local -ga e810 00:12:44.254 05:27:46 -- nvmf/common.sh@296 -- # x722=() 00:12:44.254 05:27:46 -- nvmf/common.sh@296 -- # local -ga x722 00:12:44.254 05:27:46 -- nvmf/common.sh@297 -- # mlx=() 00:12:44.254 05:27:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:44.254 05:27:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.254 05:27:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.255 05:27:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.255 05:27:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.255 05:27:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@329 
-- # pci_devs=("${e810[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:44.255 05:27:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:44.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:44.255 05:27:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:44.255 05:27:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:44.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:44.255 05:27:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:44.255 05:27:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.255 05:27:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.255 05:27:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:44.255 Found net devices under 0000:31:00.0: cvl_0_0 00:12:44.255 05:27:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:44.255 05:27:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.255 05:27:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.255 05:27:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:44.255 Found net devices under 0000:31:00.1: cvl_0_1 00:12:44.255 05:27:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:44.255 05:27:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:44.255 05:27:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.255 05:27:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.255 05:27:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:44.255 05:27:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.255 05:27:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.255 05:27:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:44.255 05:27:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.255 05:27:46 -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.255 05:27:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:44.255 05:27:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:44.255 05:27:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.255 05:27:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.255 05:27:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.255 05:27:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.255 05:27:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:44.255 05:27:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.255 05:27:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.255 05:27:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.255 05:27:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:44.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:12:44.255 00:12:44.255 --- 10.0.0.2 ping statistics --- 00:12:44.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.255 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:12:44.255 05:27:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:12:44.255 00:12:44.255 --- 10.0.0.1 ping statistics --- 00:12:44.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.255 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:44.255 05:27:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.255 05:27:46 -- nvmf/common.sh@410 -- # return 0 00:12:44.255 05:27:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:44.255 05:27:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.255 05:27:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:44.255 05:27:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.255 05:27:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:44.255 05:27:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:44.255 05:27:46 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:44.255 05:27:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:44.255 05:27:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.255 05:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:44.255 05:27:46 -- nvmf/common.sh@469 -- # nvmfpid=1716910 00:12:44.255 05:27:46 -- nvmf/common.sh@470 -- # waitforlisten 1716910 00:12:44.255 05:27:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.255 05:27:46 -- common/autotest_common.sh@829 -- # '[' -z 1716910 ']' 00:12:44.255 05:27:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.255 05:27:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.255 05:27:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
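What the nvmf_tcp_init trace above builds, in short: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2/24, the peer port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, TCP port 4420 is opened in iptables, reachability is verified with one ping in each direction, and the nvme-tcp initiator module is loaded before nvmf_tgt is started inside the namespace. A condensed, annotated sketch of the same plumbing follows; the interface names and addresses are the ones this run used, and the harness's own helper adds error handling and cleanup that is omitted here.

    # Sketch only: condensed from the nvmf_tcp_init trace above.
    TGT_NS=cvl_0_0_ns_spdk
    ip netns add "$TGT_NS"                          # private namespace for the target side
    ip link set cvl_0_0 netns "$TGT_NS"             # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator port stays in the root namespace
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                              # root namespace -> target namespace
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1      # target namespace -> root namespace
    modprobe nvme-tcp                               # host-side NVMe/TCP initiator driver

Running nvmf_tgt inside the namespace is what lets a single machine drive real port-to-port NVMe/TCP traffic over the two cabled E810 ports instead of plain loopback.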
00:12:44.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.255 05:27:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.255 05:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:44.255 [2024-12-07 05:27:46.880301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:44.255 [2024-12-07 05:27:46.880365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.255 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.255 [2024-12-07 05:27:46.953244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.255 [2024-12-07 05:27:47.025483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:44.255 [2024-12-07 05:27:47.025619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.255 [2024-12-07 05:27:47.025630] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.255 [2024-12-07 05:27:47.025644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.255 [2024-12-07 05:27:47.025788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.255 [2024-12-07 05:27:47.025904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.255 [2024-12-07 05:27:47.026063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.255 [2024-12-07 05:27:47.026073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.516 05:27:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.516 05:27:47 -- common/autotest_common.sh@862 -- # return 0 00:12:44.516 05:27:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:44.516 05:27:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.516 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.516 05:27:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.516 05:27:47 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:44.516 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.516 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.516 05:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.516 05:27:47 -- target/rpc.sh@26 -- # stats='{ 00:12:44.516 "tick_rate": 2400000000, 00:12:44.516 "poll_groups": [ 00:12:44.516 { 00:12:44.516 "name": "nvmf_tgt_poll_group_0", 00:12:44.516 "admin_qpairs": 0, 00:12:44.516 "io_qpairs": 0, 00:12:44.516 "current_admin_qpairs": 0, 00:12:44.516 "current_io_qpairs": 0, 00:12:44.516 "pending_bdev_io": 0, 00:12:44.516 "completed_nvme_io": 0, 00:12:44.516 "transports": [] 00:12:44.516 }, 00:12:44.516 { 00:12:44.516 "name": "nvmf_tgt_poll_group_1", 00:12:44.516 "admin_qpairs": 0, 00:12:44.516 "io_qpairs": 0, 00:12:44.516 "current_admin_qpairs": 0, 00:12:44.516 "current_io_qpairs": 0, 00:12:44.516 "pending_bdev_io": 0, 00:12:44.516 "completed_nvme_io": 0, 00:12:44.516 "transports": [] 00:12:44.516 }, 00:12:44.516 { 00:12:44.516 "name": "nvmf_tgt_poll_group_2", 00:12:44.516 "admin_qpairs": 0, 00:12:44.516 "io_qpairs": 0, 00:12:44.516 "current_admin_qpairs": 0, 00:12:44.516 "current_io_qpairs": 0, 00:12:44.516 "pending_bdev_io": 0, 00:12:44.516 
"completed_nvme_io": 0, 00:12:44.516 "transports": [] 00:12:44.516 }, 00:12:44.516 { 00:12:44.516 "name": "nvmf_tgt_poll_group_3", 00:12:44.516 "admin_qpairs": 0, 00:12:44.516 "io_qpairs": 0, 00:12:44.516 "current_admin_qpairs": 0, 00:12:44.516 "current_io_qpairs": 0, 00:12:44.516 "pending_bdev_io": 0, 00:12:44.516 "completed_nvme_io": 0, 00:12:44.516 "transports": [] 00:12:44.516 } 00:12:44.516 ] 00:12:44.516 }' 00:12:44.516 05:27:47 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:44.516 05:27:47 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:44.516 05:27:47 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:44.516 05:27:47 -- target/rpc.sh@15 -- # wc -l 00:12:44.777 05:27:47 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:44.777 05:27:47 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:44.777 05:27:47 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:44.777 05:27:47 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.777 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.777 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 [2024-12-07 05:27:47.839664] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.777 05:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.777 05:27:47 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:44.777 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.777 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 05:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.777 05:27:47 -- target/rpc.sh@33 -- # stats='{ 00:12:44.777 "tick_rate": 2400000000, 00:12:44.777 "poll_groups": [ 00:12:44.777 { 00:12:44.777 "name": "nvmf_tgt_poll_group_0", 00:12:44.777 "admin_qpairs": 0, 00:12:44.777 "io_qpairs": 0, 00:12:44.777 "current_admin_qpairs": 0, 00:12:44.777 "current_io_qpairs": 0, 00:12:44.777 "pending_bdev_io": 0, 00:12:44.777 "completed_nvme_io": 0, 00:12:44.777 "transports": [ 00:12:44.777 { 00:12:44.777 "trtype": "TCP" 00:12:44.777 } 00:12:44.777 ] 00:12:44.777 }, 00:12:44.777 { 00:12:44.777 "name": "nvmf_tgt_poll_group_1", 00:12:44.777 "admin_qpairs": 0, 00:12:44.777 "io_qpairs": 0, 00:12:44.777 "current_admin_qpairs": 0, 00:12:44.777 "current_io_qpairs": 0, 00:12:44.777 "pending_bdev_io": 0, 00:12:44.777 "completed_nvme_io": 0, 00:12:44.777 "transports": [ 00:12:44.777 { 00:12:44.777 "trtype": "TCP" 00:12:44.777 } 00:12:44.777 ] 00:12:44.777 }, 00:12:44.777 { 00:12:44.777 "name": "nvmf_tgt_poll_group_2", 00:12:44.777 "admin_qpairs": 0, 00:12:44.777 "io_qpairs": 0, 00:12:44.777 "current_admin_qpairs": 0, 00:12:44.777 "current_io_qpairs": 0, 00:12:44.777 "pending_bdev_io": 0, 00:12:44.777 "completed_nvme_io": 0, 00:12:44.777 "transports": [ 00:12:44.777 { 00:12:44.777 "trtype": "TCP" 00:12:44.777 } 00:12:44.777 ] 00:12:44.777 }, 00:12:44.777 { 00:12:44.777 "name": "nvmf_tgt_poll_group_3", 00:12:44.777 "admin_qpairs": 0, 00:12:44.777 "io_qpairs": 0, 00:12:44.777 "current_admin_qpairs": 0, 00:12:44.777 "current_io_qpairs": 0, 00:12:44.777 "pending_bdev_io": 0, 00:12:44.777 "completed_nvme_io": 0, 00:12:44.777 "transports": [ 00:12:44.777 { 00:12:44.777 "trtype": "TCP" 00:12:44.777 } 00:12:44.777 ] 00:12:44.777 } 00:12:44.777 ] 00:12:44.777 }' 00:12:44.777 05:27:47 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@20 -- # jq 
'.poll_groups[].admin_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.777 05:27:47 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:44.777 05:27:47 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.777 05:27:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.777 05:27:47 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:44.777 05:27:47 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:44.777 05:27:47 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:44.777 05:27:47 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:44.777 05:27:47 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.777 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.777 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 Malloc1 00:12:44.777 05:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.777 05:27:47 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.777 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.777 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 05:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.777 05:27:47 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.777 05:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.777 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 05:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.778 05:27:48 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:44.778 05:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.778 05:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:45.038 05:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.038 05:27:48 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.038 05:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.038 05:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:45.038 [2024-12-07 05:27:48.031585] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.038 05:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.038 05:27:48 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:45.038 05:27:48 -- common/autotest_common.sh@650 -- # local es=0 00:12:45.039 05:27:48 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:45.039 05:27:48 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:45.039 05:27:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.039 05:27:48 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:45.039 05:27:48 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.039 05:27:48 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:45.039 05:27:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.039 05:27:48 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:45.039 05:27:48 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:45.039 05:27:48 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:45.039 [2024-12-07 05:27:48.068319] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:45.039 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:45.039 could not add new controller: failed to write to nvme-fabrics device 00:12:45.039 05:27:48 -- common/autotest_common.sh@653 -- # es=1 00:12:45.039 05:27:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.039 05:27:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.039 05:27:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.039 05:27:48 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:45.039 05:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.039 05:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:45.039 05:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.039 05:27:48 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.425 05:27:49 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.425 05:27:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.425 05:27:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.425 05:27:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:46.425 05:27:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:48.972 05:27:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:48.972 05:27:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:48.972 05:27:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.973 05:27:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:48.973 05:27:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.973 05:27:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:48.973 05:27:51 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.973 05:27:51 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.973 05:27:51 -- common/autotest_common.sh@1208 -- # local i=0 00:12:48.973 05:27:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:48.973 05:27:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.973 05:27:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:48.973 05:27:51 -- common/autotest_common.sh@1216 -- 
# grep -q -w SPDKISFASTANDAWESOME 00:12:48.973 05:27:51 -- common/autotest_common.sh@1220 -- # return 0 00:12:48.973 05:27:51 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:48.973 05:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.973 05:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:48.973 05:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.973 05:27:51 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.973 05:27:51 -- common/autotest_common.sh@650 -- # local es=0 00:12:48.973 05:27:51 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.973 05:27:51 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:48.973 05:27:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.973 05:27:51 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:48.973 05:27:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.973 05:27:51 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:48.973 05:27:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.973 05:27:51 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:48.973 05:27:51 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:48.973 05:27:51 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.973 [2024-12-07 05:27:51.802063] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:48.973 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:48.973 could not add new controller: failed to write to nvme-fabrics device 00:12:48.973 05:27:51 -- common/autotest_common.sh@653 -- # es=1 00:12:48.973 05:27:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.973 05:27:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.973 05:27:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.973 05:27:51 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:48.973 05:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.973 05:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:48.973 05:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.973 05:27:51 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.359 05:27:53 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.359 05:27:53 -- common/autotest_common.sh@1187 -- # local i=0 00:12:50.359 05:27:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.359 05:27:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 
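The two NOT-wrapped connect attempts above are the access-control check in target/rpc.sh: while the subsystem neither lists the host NQN nor has allow_any_host enabled, the initiator gets "does not allow host" back and the write to /dev/nvme-fabrics fails, which is exactly the non-zero exit the NOT helper asserts. After nvmf_subsystem_add_host the same connect succeeds; removing the host entry restores the rejection, and nvmf_subsystem_allow_any_host -e lifts the restriction for every host. A sketch of that round trip, with scripts/rpc.py standing in for the run's rpc_cmd wrapper (which also points at the namespaced target's RPC socket) and the NQNs taken from this run:

    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBSYS"       # start locked down
    # No host entry yet: this connect must fail (the harness asserts it via NOT()).
    nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" && exit 1

    scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"     # whitelist this host
    nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme disconnect -n "$SUBSYS"

    scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"  # rejection comes back
    nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" && exit 1

    scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBSYS"       # now any host may connect
    nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"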
00:12:50.359 05:27:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.274 05:27:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.274 05:27:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.274 05:27:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.274 05:27:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.274 05:27:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.274 05:27:55 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.274 05:27:55 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.274 05:27:55 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.274 05:27:55 -- common/autotest_common.sh@1208 -- # local i=0 00:12:52.274 05:27:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:52.274 05:27:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.274 05:27:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:52.274 05:27:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.274 05:27:55 -- common/autotest_common.sh@1220 -- # return 0 00:12:52.274 05:27:55 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.274 05:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.533 05:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:52.533 05:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.533 05:27:55 -- target/rpc.sh@81 -- # seq 1 5 00:12:52.533 05:27:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.533 05:27:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.533 05:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.533 05:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:52.533 05:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.533 05:27:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.533 05:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.533 05:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:52.533 [2024-12-07 05:27:55.545979] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.533 05:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.533 05:27:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.533 05:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.533 05:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:52.533 05:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.533 05:27:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.533 05:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.533 05:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:52.533 05:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.533 05:27:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.912 05:27:57 -- target/rpc.sh@88 -- # 
waitforserial SPDKISFASTANDAWESOME 00:12:53.912 05:27:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:53.912 05:27:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.912 05:27:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:53.912 05:27:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:55.831 05:27:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:55.831 05:27:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:55.831 05:27:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.091 05:27:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:56.091 05:27:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.091 05:27:59 -- common/autotest_common.sh@1197 -- # return 0 00:12:56.091 05:27:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.091 05:27:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.091 05:27:59 -- common/autotest_common.sh@1208 -- # local i=0 00:12:56.091 05:27:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:56.091 05:27:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.091 05:27:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:56.091 05:27:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.091 05:27:59 -- common/autotest_common.sh@1220 -- # return 0 00:12:56.091 05:27:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.091 05:27:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 [2024-12-07 05:27:59.273964] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.091 05:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.091 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:12:56.091 05:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.091 05:27:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.023 05:28:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.023 05:28:00 -- common/autotest_common.sh@1187 -- # local i=0 00:12:58.023 05:28:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.023 05:28:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:58.023 05:28:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:59.937 05:28:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:59.937 05:28:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:59.937 05:28:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.937 05:28:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:59.937 05:28:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.937 05:28:02 -- common/autotest_common.sh@1197 -- # return 0 00:12:59.937 05:28:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.937 05:28:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.937 05:28:02 -- common/autotest_common.sh@1208 -- # local i=0 00:12:59.937 05:28:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:59.937 05:28:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.937 05:28:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:59.937 05:28:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.937 05:28:02 -- common/autotest_common.sh@1220 -- # return 0 00:12:59.937 05:28:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.937 05:28:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 05:28:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.937 05:28:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 05:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.937 05:28:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.937 05:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 05:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.937 05:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 [2024-12-07 05:28:03.030405] tcp.c: 
953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.937 05:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.937 05:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 05:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.937 05:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.937 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 05:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.937 05:28:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.369 05:28:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.369 05:28:04 -- common/autotest_common.sh@1187 -- # local i=0 00:13:01.369 05:28:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.369 05:28:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:01.369 05:28:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:03.325 05:28:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:03.325 05:28:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:03.325 05:28:06 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.325 05:28:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:03.325 05:28:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.325 05:28:06 -- common/autotest_common.sh@1197 -- # return 0 00:13:03.325 05:28:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.586 05:28:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.586 05:28:06 -- common/autotest_common.sh@1208 -- # local i=0 00:13:03.586 05:28:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:03.586 05:28:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.586 05:28:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:03.586 05:28:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.586 05:28:06 -- common/autotest_common.sh@1220 -- # return 0 00:13:03.586 05:28:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.586 05:28:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.586 05:28:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.586 05:28:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
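From target/rpc.sh@81 onward the run repeats the same create/attach/connect/verify/teardown cycle loops=5 times; waitforserial passes once lsblk reports a block device whose serial matches the subsystem serial set at creation time. One iteration boils down to the sketch below (scripts/rpc.py again stands in for rpc_cmd, and the polling is a simplified form of the harness's waitforserial counter check):

    SUBSYS=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_create_subsystem "$SUBSYS" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 5      # 64 MiB malloc bdev as nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBSYS"
    nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do   # waitforserial
        sleep 2
    done
    nvme disconnect -n "$SUBSYS"
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBSYS" 5
    scripts/rpc.py nvmf_delete_subsystem "$SUBSYS"

A second seq 1 5 pass later in the trace (target/rpc.sh@99 onward) repeats only the RPC half of this cycle, without ever connecting an initiator, to exercise the subsystem RPCs on their own.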
00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.586 05:28:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 [2024-12-07 05:28:06.801374] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.586 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.586 05:28:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.586 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.586 05:28:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.586 05:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.586 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.847 05:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.847 05:28:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.230 05:28:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.230 05:28:08 -- common/autotest_common.sh@1187 -- # local i=0 00:13:05.230 05:28:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.230 05:28:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:05.230 05:28:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:07.142 05:28:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:07.142 05:28:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:07.142 05:28:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.142 05:28:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:07.142 05:28:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.142 05:28:10 -- common/autotest_common.sh@1197 -- # return 0 00:13:07.142 05:28:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.402 05:28:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.402 05:28:10 -- common/autotest_common.sh@1208 -- # local i=0 00:13:07.402 05:28:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:07.402 05:28:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.402 05:28:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:07.402 05:28:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.402 05:28:10 -- common/autotest_common.sh@1220 -- # return 0 00:13:07.402 05:28:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 05:28:10 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.402 05:28:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 05:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.402 05:28:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.402 05:28:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 05:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.402 05:28:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 [2024-12-07 05:28:10.523710] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.402 05:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.402 05:28:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 05:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.402 05:28:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.402 05:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.402 05:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.403 05:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.403 05:28:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.789 05:28:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.789 05:28:12 -- common/autotest_common.sh@1187 -- # local i=0 00:13:08.789 05:28:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.789 05:28:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:08.789 05:28:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:11.329 05:28:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:11.329 05:28:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:11.329 05:28:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.329 05:28:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:11.329 05:28:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.329 05:28:14 -- common/autotest_common.sh@1197 -- # return 0 00:13:11.329 05:28:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.329 05:28:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.329 05:28:14 -- common/autotest_common.sh@1208 -- # local i=0 00:13:11.329 05:28:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:11.330 05:28:14 -- common/autotest_common.sh@1209 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:11.330 05:28:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@1220 -- # return 0 00:13:11.330 05:28:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # seq 1 5 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.330 05:28:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 [2024-12-07 05:28:14.233247] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.330 05:28:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 [2024-12-07 05:28:14.293386] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.330 05:28:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 [2024-12-07 05:28:14.349521] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.330 05:28:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 [2024-12-07 05:28:14.409708] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.330 05:28:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 [2024-12-07 05:28:14.469905] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.330 05:28:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.330 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.330 05:28:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.330 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.331 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.331 05:28:14 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:11.331 05:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.331 05:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:11.331 05:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.331 05:28:14 -- target/rpc.sh@110 -- # stats='{ 00:13:11.331 "tick_rate": 2400000000, 00:13:11.331 "poll_groups": [ 00:13:11.331 { 00:13:11.331 "name": "nvmf_tgt_poll_group_0", 00:13:11.331 "admin_qpairs": 0, 00:13:11.331 "io_qpairs": 224, 00:13:11.331 "current_admin_qpairs": 0, 00:13:11.331 "current_io_qpairs": 0, 00:13:11.331 "pending_bdev_io": 0, 00:13:11.331 "completed_nvme_io": 274, 00:13:11.331 "transports": [ 00:13:11.331 { 00:13:11.331 "trtype": "TCP" 00:13:11.331 } 00:13:11.331 ] 00:13:11.331 }, 00:13:11.331 { 00:13:11.331 "name": "nvmf_tgt_poll_group_1", 00:13:11.331 "admin_qpairs": 1, 00:13:11.331 "io_qpairs": 223, 00:13:11.331 "current_admin_qpairs": 0, 00:13:11.331 "current_io_qpairs": 0, 00:13:11.331 "pending_bdev_io": 0, 00:13:11.331 "completed_nvme_io": 354, 00:13:11.331 "transports": [ 00:13:11.331 { 00:13:11.331 "trtype": "TCP" 00:13:11.331 } 00:13:11.331 ] 00:13:11.331 }, 00:13:11.331 { 00:13:11.331 "name": "nvmf_tgt_poll_group_2", 00:13:11.331 "admin_qpairs": 6, 00:13:11.331 "io_qpairs": 218, 00:13:11.331 "current_admin_qpairs": 0, 00:13:11.331 "current_io_qpairs": 0, 00:13:11.331 "pending_bdev_io": 0, 00:13:11.331 "completed_nvme_io": 385, 00:13:11.331 "transports": [ 00:13:11.331 { 00:13:11.331 "trtype": "TCP" 00:13:11.331 } 00:13:11.331 ] 00:13:11.331 }, 00:13:11.331 { 00:13:11.331 "name": "nvmf_tgt_poll_group_3", 00:13:11.331 "admin_qpairs": 0, 00:13:11.331 "io_qpairs": 224, 00:13:11.331 "current_admin_qpairs": 0, 00:13:11.331 "current_io_qpairs": 0, 00:13:11.331 "pending_bdev_io": 0, 00:13:11.331 "completed_nvme_io": 226, 00:13:11.331 "transports": [ 00:13:11.331 { 00:13:11.331 "trtype": "TCP" 00:13:11.331 } 00:13:11.331 ] 00:13:11.331 } 00:13:11.331 ] 00:13:11.331 }' 00:13:11.331 05:28:14 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:11.331 05:28:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:11.331 
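The jsum helper traced just below reduces the nvmf_get_stats JSON captured above to one number per field: jq pulls the value out of each poll group and awk adds them up, which is how the script arrives at 7 admin queue pairs and 889 I/O queue pairs across the four poll groups. A minimal sketch of that reduction, fed here from the $stats variable set above for illustration rather than copied from rpc.sh:

    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 0+1+6+0 -> 7
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 224+223+218+224 -> 889

Both sums only need to be positive for the test to pass, as the (( 7 > 0 )) and (( 889 > 0 )) checks below show.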
05:28:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:11.331 05:28:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.590 05:28:14 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:11.590 05:28:14 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:11.590 05:28:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:11.590 05:28:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:11.590 05:28:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.590 05:28:14 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:11.590 05:28:14 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:11.590 05:28:14 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:11.590 05:28:14 -- target/rpc.sh@123 -- # nvmftestfini 00:13:11.590 05:28:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:11.590 05:28:14 -- nvmf/common.sh@116 -- # sync 00:13:11.590 05:28:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:11.590 05:28:14 -- nvmf/common.sh@119 -- # set +e 00:13:11.590 05:28:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:11.590 05:28:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:11.590 rmmod nvme_tcp 00:13:11.590 rmmod nvme_fabrics 00:13:11.590 rmmod nvme_keyring 00:13:11.590 05:28:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:11.590 05:28:14 -- nvmf/common.sh@123 -- # set -e 00:13:11.590 05:28:14 -- nvmf/common.sh@124 -- # return 0 00:13:11.590 05:28:14 -- nvmf/common.sh@477 -- # '[' -n 1716910 ']' 00:13:11.590 05:28:14 -- nvmf/common.sh@478 -- # killprocess 1716910 00:13:11.590 05:28:14 -- common/autotest_common.sh@936 -- # '[' -z 1716910 ']' 00:13:11.590 05:28:14 -- common/autotest_common.sh@940 -- # kill -0 1716910 00:13:11.590 05:28:14 -- common/autotest_common.sh@941 -- # uname 00:13:11.590 05:28:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.590 05:28:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1716910 00:13:11.590 05:28:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.590 05:28:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.590 05:28:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1716910' 00:13:11.590 killing process with pid 1716910 00:13:11.590 05:28:14 -- common/autotest_common.sh@955 -- # kill 1716910 00:13:11.590 05:28:14 -- common/autotest_common.sh@960 -- # wait 1716910 00:13:11.850 05:28:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:11.850 05:28:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:11.850 05:28:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:11.850 05:28:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.850 05:28:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:11.850 05:28:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.850 05:28:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.850 05:28:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.757 05:28:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:13.757 00:13:13.757 real 0m37.925s 00:13:13.757 user 1m53.443s 00:13:13.757 sys 0m7.792s 00:13:13.757 05:28:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.757 05:28:16 -- common/autotest_common.sh@10 -- # set +x 00:13:13.757 ************************************ 00:13:13.757 END TEST nvmf_rpc 00:13:13.757 ************************************ 00:13:14.019 05:28:17 -- nvmf/nvmf.sh@30 -- # run_test 
nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.019 05:28:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.019 05:28:17 -- common/autotest_common.sh@10 -- # set +x 00:13:14.019 ************************************ 00:13:14.019 START TEST nvmf_invalid 00:13:14.019 ************************************ 00:13:14.019 05:28:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.019 * Looking for test storage... 00:13:14.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.019 05:28:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:14.019 05:28:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:14.019 05:28:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:14.019 05:28:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:14.019 05:28:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:14.019 05:28:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:14.019 05:28:17 -- scripts/common.sh@335 -- # IFS=.-: 00:13:14.019 05:28:17 -- scripts/common.sh@335 -- # read -ra ver1 00:13:14.019 05:28:17 -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.019 05:28:17 -- scripts/common.sh@336 -- # read -ra ver2 00:13:14.019 05:28:17 -- scripts/common.sh@337 -- # local 'op=<' 00:13:14.019 05:28:17 -- scripts/common.sh@339 -- # ver1_l=2 00:13:14.019 05:28:17 -- scripts/common.sh@340 -- # ver2_l=1 00:13:14.019 05:28:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:14.019 05:28:17 -- scripts/common.sh@343 -- # case "$op" in 00:13:14.019 05:28:17 -- scripts/common.sh@344 -- # : 1 00:13:14.019 05:28:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:14.019 05:28:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.019 05:28:17 -- scripts/common.sh@364 -- # decimal 1 00:13:14.019 05:28:17 -- scripts/common.sh@352 -- # local d=1 00:13:14.019 05:28:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.019 05:28:17 -- scripts/common.sh@354 -- # echo 1 00:13:14.019 05:28:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:14.019 05:28:17 -- scripts/common.sh@365 -- # decimal 2 00:13:14.019 05:28:17 -- scripts/common.sh@352 -- # local d=2 00:13:14.019 05:28:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.019 05:28:17 -- scripts/common.sh@354 -- # echo 2 00:13:14.019 05:28:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:14.019 05:28:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:14.019 05:28:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:14.019 05:28:17 -- scripts/common.sh@367 -- # return 0 00:13:14.019 05:28:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:14.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.019 --rc genhtml_branch_coverage=1 00:13:14.019 --rc genhtml_function_coverage=1 00:13:14.019 --rc genhtml_legend=1 00:13:14.019 --rc geninfo_all_blocks=1 00:13:14.019 --rc geninfo_unexecuted_blocks=1 00:13:14.019 00:13:14.019 ' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:14.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.019 --rc genhtml_branch_coverage=1 00:13:14.019 --rc genhtml_function_coverage=1 00:13:14.019 --rc genhtml_legend=1 00:13:14.019 --rc geninfo_all_blocks=1 00:13:14.019 --rc geninfo_unexecuted_blocks=1 00:13:14.019 00:13:14.019 ' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:14.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.019 --rc genhtml_branch_coverage=1 00:13:14.019 --rc genhtml_function_coverage=1 00:13:14.019 --rc genhtml_legend=1 00:13:14.019 --rc geninfo_all_blocks=1 00:13:14.019 --rc geninfo_unexecuted_blocks=1 00:13:14.019 00:13:14.019 ' 00:13:14.019 05:28:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:14.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.019 --rc genhtml_branch_coverage=1 00:13:14.019 --rc genhtml_function_coverage=1 00:13:14.019 --rc genhtml_legend=1 00:13:14.019 --rc geninfo_all_blocks=1 00:13:14.019 --rc geninfo_unexecuted_blocks=1 00:13:14.019 00:13:14.019 ' 00:13:14.019 05:28:17 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.019 05:28:17 -- nvmf/common.sh@7 -- # uname -s 00:13:14.019 05:28:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.019 05:28:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.019 05:28:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.019 05:28:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.019 05:28:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.019 05:28:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.019 05:28:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.019 05:28:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.019 05:28:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.019 05:28:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.019 05:28:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:14.019 05:28:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:14.019 05:28:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.019 05:28:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.019 05:28:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.019 05:28:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.019 05:28:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.019 05:28:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.019 05:28:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.019 05:28:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.019 05:28:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.019 05:28:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.019 05:28:17 -- paths/export.sh@5 -- # export PATH 00:13:14.019 05:28:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.019 05:28:17 -- nvmf/common.sh@46 -- # : 0 00:13:14.019 05:28:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:14.019 05:28:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:14.019 05:28:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:14.020 05:28:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.020 05:28:17 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.020 05:28:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:14.020 05:28:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:14.020 05:28:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:14.020 05:28:17 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:14.020 05:28:17 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.020 05:28:17 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:14.020 05:28:17 -- target/invalid.sh@14 -- # target=foobar 00:13:14.020 05:28:17 -- target/invalid.sh@16 -- # RANDOM=0 00:13:14.020 05:28:17 -- target/invalid.sh@34 -- # nvmftestinit 00:13:14.020 05:28:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:14.020 05:28:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.020 05:28:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:14.020 05:28:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:14.020 05:28:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:14.020 05:28:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.020 05:28:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.020 05:28:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.280 05:28:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:14.280 05:28:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:14.280 05:28:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:14.280 05:28:17 -- common/autotest_common.sh@10 -- # set +x 00:13:22.418 05:28:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:22.418 05:28:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:22.418 05:28:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:22.418 05:28:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:22.418 05:28:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:22.418 05:28:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:22.418 05:28:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:22.418 05:28:24 -- nvmf/common.sh@294 -- # net_devs=() 00:13:22.418 05:28:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:22.418 05:28:24 -- nvmf/common.sh@295 -- # e810=() 00:13:22.418 05:28:24 -- nvmf/common.sh@295 -- # local -ga e810 00:13:22.418 05:28:24 -- nvmf/common.sh@296 -- # x722=() 00:13:22.418 05:28:24 -- nvmf/common.sh@296 -- # local -ga x722 00:13:22.418 05:28:24 -- nvmf/common.sh@297 -- # mlx=() 00:13:22.418 05:28:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:22.418 05:28:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.418 05:28:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.418 05:28:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:22.418 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:22.418 05:28:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.418 05:28:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:22.418 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:22.418 05:28:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.418 05:28:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.418 05:28:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.418 05:28:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:22.418 Found net devices under 0000:31:00.0: cvl_0_0 00:13:22.418 05:28:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.418 05:28:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.418 05:28:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.418 05:28:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:22.418 Found net devices under 0000:31:00.1: cvl_0_1 00:13:22.418 05:28:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:22.418 05:28:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:22.418 05:28:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.418 05:28:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.418 05:28:24 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:22.418 05:28:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.418 05:28:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.418 05:28:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:22.418 05:28:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.418 05:28:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.418 05:28:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:22.418 05:28:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:22.418 05:28:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.418 05:28:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.418 05:28:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.418 05:28:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.418 05:28:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:22.418 05:28:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.418 05:28:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.418 05:28:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.418 05:28:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:22.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:13:22.418 00:13:22.418 --- 10.0.0.2 ping statistics --- 00:13:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.418 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:13:22.418 05:28:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:13:22.418 00:13:22.418 --- 10.0.0.1 ping statistics --- 00:13:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.418 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:22.418 05:28:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.418 05:28:24 -- nvmf/common.sh@410 -- # return 0 00:13:22.418 05:28:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:22.418 05:28:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.418 05:28:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:22.418 05:28:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.418 05:28:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:22.418 05:28:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:22.418 05:28:24 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:22.418 05:28:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:22.418 05:28:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.418 05:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:22.418 05:28:24 -- nvmf/common.sh@469 -- # nvmfpid=1726914 00:13:22.418 05:28:24 -- nvmf/common.sh@470 -- # waitforlisten 1726914 00:13:22.418 05:28:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.418 05:28:24 -- common/autotest_common.sh@829 -- # '[' -z 1726914 ']' 00:13:22.418 05:28:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.418 05:28:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.419 05:28:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.419 05:28:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.419 05:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:22.419 [2024-12-07 05:28:24.953530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.419 [2024-12-07 05:28:24.953577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.419 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.419 [2024-12-07 05:28:25.021491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.419 [2024-12-07 05:28:25.085638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.419 [2024-12-07 05:28:25.085768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.419 [2024-12-07 05:28:25.085779] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.419 [2024-12-07 05:28:25.085788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
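Before nvmf_tgt comes up, the nvmf_tcp_init helper traced above splits the two e810 ports: cvl_0_0 moves into a network namespace and carries the target address, while cvl_0_1 stays on the host as the initiator side. Collapsed into a plain command sequence (interface names and addresses taken from the trace; the real logic, including cleanup of earlier runs, lives in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target address inside the namespace

nvmf_tgt itself is then started with ip netns exec cvl_0_0_ns_spdk prepended (the nvmfpid=1726914 process above), so the NVMe/TCP listener at 10.0.0.2:4420 is only reachable across the cvl_0_1 -> cvl_0_0 link.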
00:13:22.419 [2024-12-07 05:28:25.085936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.419 [2024-12-07 05:28:25.086053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.419 [2024-12-07 05:28:25.086153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.419 [2024-12-07 05:28:25.086153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.678 05:28:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.678 05:28:25 -- common/autotest_common.sh@862 -- # return 0 00:13:22.678 05:28:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:22.678 05:28:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.678 05:28:25 -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 05:28:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.678 05:28:25 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:22.678 05:28:25 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3367 00:13:22.938 [2024-12-07 05:28:25.924645] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:22.938 05:28:25 -- target/invalid.sh@40 -- # out='request: 00:13:22.938 { 00:13:22.938 "nqn": "nqn.2016-06.io.spdk:cnode3367", 00:13:22.938 "tgt_name": "foobar", 00:13:22.938 "method": "nvmf_create_subsystem", 00:13:22.938 "req_id": 1 00:13:22.938 } 00:13:22.938 Got JSON-RPC error response 00:13:22.938 response: 00:13:22.938 { 00:13:22.938 "code": -32603, 00:13:22.938 "message": "Unable to find target foobar" 00:13:22.938 }' 00:13:22.938 05:28:25 -- target/invalid.sh@41 -- # [[ request: 00:13:22.938 { 00:13:22.938 "nqn": "nqn.2016-06.io.spdk:cnode3367", 00:13:22.938 "tgt_name": "foobar", 00:13:22.938 "method": "nvmf_create_subsystem", 00:13:22.938 "req_id": 1 00:13:22.938 } 00:13:22.938 Got JSON-RPC error response 00:13:22.938 response: 00:13:22.938 { 00:13:22.938 "code": -32603, 00:13:22.938 "message": "Unable to find target foobar" 00:13:22.938 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:22.938 05:28:25 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:22.938 05:28:25 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29972 00:13:22.938 [2024-12-07 05:28:26.105272] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29972: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:22.938 05:28:26 -- target/invalid.sh@45 -- # out='request: 00:13:22.938 { 00:13:22.938 "nqn": "nqn.2016-06.io.spdk:cnode29972", 00:13:22.938 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.938 "method": "nvmf_create_subsystem", 00:13:22.938 "req_id": 1 00:13:22.938 } 00:13:22.938 Got JSON-RPC error response 00:13:22.938 response: 00:13:22.938 { 00:13:22.938 "code": -32602, 00:13:22.938 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.938 }' 00:13:22.938 05:28:26 -- target/invalid.sh@46 -- # [[ request: 00:13:22.938 { 00:13:22.938 "nqn": "nqn.2016-06.io.spdk:cnode29972", 00:13:22.938 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.938 "method": "nvmf_create_subsystem", 00:13:22.938 "req_id": 1 00:13:22.938 } 00:13:22.938 Got JSON-RPC error response 00:13:22.938 response: 00:13:22.938 { 
00:13:22.938 "code": -32602, 00:13:22.938 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.938 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:22.938 05:28:26 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:22.938 05:28:26 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3443 00:13:23.200 [2024-12-07 05:28:26.289797] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3443: invalid model number 'SPDK_Controller' 00:13:23.200 05:28:26 -- target/invalid.sh@50 -- # out='request: 00:13:23.200 { 00:13:23.200 "nqn": "nqn.2016-06.io.spdk:cnode3443", 00:13:23.200 "model_number": "SPDK_Controller\u001f", 00:13:23.200 "method": "nvmf_create_subsystem", 00:13:23.200 "req_id": 1 00:13:23.200 } 00:13:23.200 Got JSON-RPC error response 00:13:23.200 response: 00:13:23.200 { 00:13:23.200 "code": -32602, 00:13:23.200 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.200 }' 00:13:23.200 05:28:26 -- target/invalid.sh@51 -- # [[ request: 00:13:23.200 { 00:13:23.200 "nqn": "nqn.2016-06.io.spdk:cnode3443", 00:13:23.200 "model_number": "SPDK_Controller\u001f", 00:13:23.200 "method": "nvmf_create_subsystem", 00:13:23.200 "req_id": 1 00:13:23.200 } 00:13:23.200 Got JSON-RPC error response 00:13:23.200 response: 00:13:23.200 { 00:13:23.200 "code": -32602, 00:13:23.200 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.200 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.200 05:28:26 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:23.200 05:28:26 -- target/invalid.sh@19 -- # local length=21 ll 00:13:23.200 05:28:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.200 05:28:26 -- target/invalid.sh@21 -- # local chars 00:13:23.200 05:28:26 -- target/invalid.sh@22 -- # local string 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 89 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=Y 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 85 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=U 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 90 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=Z 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 113 00:13:23.200 05:28:26 -- target/invalid.sh@25 
-- # echo -e '\x71' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=q 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 69 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=E 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 47 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=/ 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 80 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=P 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 35 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+='#' 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 101 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=e 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 127 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 94 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+='^' 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 63 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+='?' 
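The long run of printf %x, echo -e, and string+= steps above and below is the unrolled xtrace of the gen_random_s helper: it builds a string of the requested length by picking entries from the printable-ASCII chars array (and since RANDOM=0 was set earlier in the trace, the sequence is reproducible from run to run). A rough reconstruction for reading the trace, not a copy of the helper in invalid.sh:

    gen_random_s() {
        local length=$1 ll string=
        local chars=( $(seq 32 127) )        # same 32..127 range as the chars=(...) array in the trace
        for (( ll = 0; ll < length; ll++ )); do
            # pick a code point, print it as hex, and let echo -e turn \xNN into the character
            string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The 21-character result being assembled here becomes the deliberately invalid serial number handed to nvmf_create_subsystem a few lines further on.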
00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 38 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+='&' 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 80 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+=P 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # printf %x 124 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:23.200 05:28:26 -- target/invalid.sh@25 -- # string+='|' 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.200 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 67 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=C 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 107 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=k 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 58 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=: 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 69 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=E 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 125 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+='}' 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 92 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+='\' 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:13:23.462 05:28:26 -- target/invalid.sh@31 -- # echo 'YUZqE/P#e^?&P|Ck:E}\' 00:13:23.462 05:28:26 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'YUZqE/P#e^?&P|Ck:E}\' nqn.2016-06.io.spdk:cnode1481 00:13:23.462 [2024-12-07 05:28:26.634929] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1481: invalid serial number 
'YUZqE/P#e^?&P|Ck:E}\' 00:13:23.462 05:28:26 -- target/invalid.sh@54 -- # out='request: 00:13:23.462 { 00:13:23.462 "nqn": "nqn.2016-06.io.spdk:cnode1481", 00:13:23.462 "serial_number": "YUZqE/P#e\u007f^?&P|Ck:E}\\", 00:13:23.462 "method": "nvmf_create_subsystem", 00:13:23.462 "req_id": 1 00:13:23.462 } 00:13:23.462 Got JSON-RPC error response 00:13:23.462 response: 00:13:23.462 { 00:13:23.462 "code": -32602, 00:13:23.462 "message": "Invalid SN YUZqE/P#e\u007f^?&P|Ck:E}\\" 00:13:23.462 }' 00:13:23.462 05:28:26 -- target/invalid.sh@55 -- # [[ request: 00:13:23.462 { 00:13:23.462 "nqn": "nqn.2016-06.io.spdk:cnode1481", 00:13:23.462 "serial_number": "YUZqE/P#e\u007f^?&P|Ck:E}\\", 00:13:23.462 "method": "nvmf_create_subsystem", 00:13:23.462 "req_id": 1 00:13:23.462 } 00:13:23.462 Got JSON-RPC error response 00:13:23.462 response: 00:13:23.462 { 00:13:23.462 "code": -32602, 00:13:23.462 "message": "Invalid SN YUZqE/P#e\u007f^?&P|Ck:E}\\" 00:13:23.462 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:23.462 05:28:26 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:23.462 05:28:26 -- target/invalid.sh@19 -- # local length=41 ll 00:13:23.462 05:28:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.462 05:28:26 -- target/invalid.sh@21 -- # local chars 00:13:23.462 05:28:26 -- target/invalid.sh@22 -- # local string 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 77 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=M 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 82 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=R 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 106 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # string+=j 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.462 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # printf %x 88 00:13:23.462 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=X 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 45 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=- 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll 
< length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 120 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=x 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 46 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=. 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 93 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=']' 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 111 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=o 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 54 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=6 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 71 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=G 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 72 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=H 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 124 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+='|' 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 94 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+='^' 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # printf %x 93 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.724 05:28:26 -- target/invalid.sh@25 -- # string+=']' 00:13:23.724 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 57 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=9 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 118 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=v 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 122 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=z 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 107 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=k 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 48 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=0 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 87 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=W 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 41 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=')' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 75 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=K 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 62 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='>' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 37 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=% 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 100 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=d 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 34 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='"' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 74 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=J 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 70 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=F 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 91 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='[' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 64 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=@ 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 119 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=w 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 70 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=F 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 119 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=w 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 33 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='!' 
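The 41-character string being assembled above and below is the next negative input: it becomes the model number for nvmf_create_subsystem, and the test then checks the failure by capturing the rpc.py output into out and matching it with a bash glob instead of parsing the JSON. In simplified form, with rpc, nqn and model_number standing in for the script's own variables:

    out=$($rpc nvmf_create_subsystem -d "$model_number" "$nqn" 2>&1) || true   # the RPC is expected to be rejected
    [[ $out == *"Invalid MN"* ]]                                               # pass only if SPDK refused the model number

The same capture-and-glob pattern is behind the earlier Unable to find target, Invalid SN and Invalid MN checks in this trace; the backslash-escaped *\I\n\v\a\l\i\d\ \M\N* form is simply how xtrace renders that pattern.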
00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 36 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='$' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 127 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 38 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+='&' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # printf %x 39 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:23.725 05:28:26 -- target/invalid.sh@25 -- # string+=\' 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.725 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # printf %x 54 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # string+=6 00:13:23.986 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.986 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # printf %x 72 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:23.986 05:28:26 -- target/invalid.sh@25 -- # string+=H 00:13:23.986 05:28:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.986 05:28:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.986 05:28:26 -- target/invalid.sh@28 -- # [[ M == \- ]] 00:13:23.986 05:28:26 -- target/invalid.sh@31 -- # echo 'MRjX-x.]o6GH|^]9vzk0W)K>%d"JF[@wFw!$&'\''6H' 00:13:23.986 05:28:26 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'MRjX-x.]o6GH|^]9vzk0W)K>%d"JF[@wFw!$&'\''6H' nqn.2016-06.io.spdk:cnode2834 00:13:23.986 [2024-12-07 05:28:27.120481] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2834: invalid model number 'MRjX-x.]o6GH|^]9vzk0W)K>%d"JF[@wFw!$&'6H' 00:13:23.986 05:28:27 -- target/invalid.sh@58 -- # out='request: 00:13:23.986 { 00:13:23.986 "nqn": "nqn.2016-06.io.spdk:cnode2834", 00:13:23.986 "model_number": "MRjX-x.]o6GH|^]9vzk0W)K>%d\"JF[@wFw!$\u007f&'\''6H", 00:13:23.986 "method": "nvmf_create_subsystem", 00:13:23.986 "req_id": 1 00:13:23.986 } 00:13:23.986 Got JSON-RPC error response 00:13:23.986 response: 00:13:23.986 { 00:13:23.986 "code": -32602, 00:13:23.986 "message": "Invalid MN MRjX-x.]o6GH|^]9vzk0W)K>%d\"JF[@wFw!$\u007f&'\''6H" 00:13:23.986 }' 00:13:23.986 05:28:27 -- target/invalid.sh@59 -- # [[ request: 00:13:23.986 { 00:13:23.986 "nqn": "nqn.2016-06.io.spdk:cnode2834", 00:13:23.986 "model_number": "MRjX-x.]o6GH|^]9vzk0W)K>%d\"JF[@wFw!$\u007f&'6H", 00:13:23.986 "method": "nvmf_create_subsystem", 00:13:23.986 "req_id": 1 00:13:23.986 } 00:13:23.986 Got JSON-RPC error response 00:13:23.986 response: 00:13:23.986 { 
00:13:23.986 "code": -32602, 00:13:23.986 "message": "Invalid MN MRjX-x.]o6GH|^]9vzk0W)K>%d\"JF[@wFw!$\u007f&'6H" 00:13:23.986 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.986 05:28:27 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:24.247 [2024-12-07 05:28:27.289096] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.247 05:28:27 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:24.508 05:28:27 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:24.508 05:28:27 -- target/invalid.sh@67 -- # echo '' 00:13:24.508 05:28:27 -- target/invalid.sh@67 -- # head -n 1 00:13:24.508 05:28:27 -- target/invalid.sh@67 -- # IP= 00:13:24.509 05:28:27 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:24.509 [2024-12-07 05:28:27.651287] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:24.509 05:28:27 -- target/invalid.sh@69 -- # out='request: 00:13:24.509 { 00:13:24.509 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:24.509 "listen_address": { 00:13:24.509 "trtype": "tcp", 00:13:24.509 "traddr": "", 00:13:24.509 "trsvcid": "4421" 00:13:24.509 }, 00:13:24.509 "method": "nvmf_subsystem_remove_listener", 00:13:24.509 "req_id": 1 00:13:24.509 } 00:13:24.509 Got JSON-RPC error response 00:13:24.509 response: 00:13:24.509 { 00:13:24.509 "code": -32602, 00:13:24.509 "message": "Invalid parameters" 00:13:24.509 }' 00:13:24.509 05:28:27 -- target/invalid.sh@70 -- # [[ request: 00:13:24.509 { 00:13:24.509 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:24.509 "listen_address": { 00:13:24.509 "trtype": "tcp", 00:13:24.509 "traddr": "", 00:13:24.509 "trsvcid": "4421" 00:13:24.509 }, 00:13:24.509 "method": "nvmf_subsystem_remove_listener", 00:13:24.509 "req_id": 1 00:13:24.509 } 00:13:24.509 Got JSON-RPC error response 00:13:24.509 response: 00:13:24.509 { 00:13:24.509 "code": -32602, 00:13:24.509 "message": "Invalid parameters" 00:13:24.509 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:24.509 05:28:27 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29477 -i 0 00:13:24.770 [2024-12-07 05:28:27.827824] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29477: invalid cntlid range [0-65519] 00:13:24.770 05:28:27 -- target/invalid.sh@73 -- # out='request: 00:13:24.770 { 00:13:24.770 "nqn": "nqn.2016-06.io.spdk:cnode29477", 00:13:24.770 "min_cntlid": 0, 00:13:24.770 "method": "nvmf_create_subsystem", 00:13:24.770 "req_id": 1 00:13:24.770 } 00:13:24.770 Got JSON-RPC error response 00:13:24.770 response: 00:13:24.770 { 00:13:24.770 "code": -32602, 00:13:24.770 "message": "Invalid cntlid range [0-65519]" 00:13:24.770 }' 00:13:24.770 05:28:27 -- target/invalid.sh@74 -- # [[ request: 00:13:24.770 { 00:13:24.770 "nqn": "nqn.2016-06.io.spdk:cnode29477", 00:13:24.770 "min_cntlid": 0, 00:13:24.770 "method": "nvmf_create_subsystem", 00:13:24.770 "req_id": 1 00:13:24.770 } 00:13:24.770 Got JSON-RPC error response 00:13:24.770 response: 00:13:24.770 { 00:13:24.770 "code": -32602, 00:13:24.770 "message": "Invalid cntlid range [0-65519]" 00:13:24.770 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:24.770 05:28:27 -- 
target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8037 -i 65520 00:13:24.770 [2024-12-07 05:28:28.008435] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8037: invalid cntlid range [65520-65519] 00:13:25.031 05:28:28 -- target/invalid.sh@75 -- # out='request: 00:13:25.031 { 00:13:25.031 "nqn": "nqn.2016-06.io.spdk:cnode8037", 00:13:25.031 "min_cntlid": 65520, 00:13:25.031 "method": "nvmf_create_subsystem", 00:13:25.031 "req_id": 1 00:13:25.031 } 00:13:25.031 Got JSON-RPC error response 00:13:25.031 response: 00:13:25.031 { 00:13:25.031 "code": -32602, 00:13:25.031 "message": "Invalid cntlid range [65520-65519]" 00:13:25.031 }' 00:13:25.031 05:28:28 -- target/invalid.sh@76 -- # [[ request: 00:13:25.031 { 00:13:25.031 "nqn": "nqn.2016-06.io.spdk:cnode8037", 00:13:25.031 "min_cntlid": 65520, 00:13:25.032 "method": "nvmf_create_subsystem", 00:13:25.032 "req_id": 1 00:13:25.032 } 00:13:25.032 Got JSON-RPC error response 00:13:25.032 response: 00:13:25.032 { 00:13:25.032 "code": -32602, 00:13:25.032 "message": "Invalid cntlid range [65520-65519]" 00:13:25.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.032 05:28:28 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2413 -I 0 00:13:25.032 [2024-12-07 05:28:28.172980] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2413: invalid cntlid range [1-0] 00:13:25.032 05:28:28 -- target/invalid.sh@77 -- # out='request: 00:13:25.032 { 00:13:25.032 "nqn": "nqn.2016-06.io.spdk:cnode2413", 00:13:25.032 "max_cntlid": 0, 00:13:25.032 "method": "nvmf_create_subsystem", 00:13:25.032 "req_id": 1 00:13:25.032 } 00:13:25.032 Got JSON-RPC error response 00:13:25.032 response: 00:13:25.032 { 00:13:25.032 "code": -32602, 00:13:25.032 "message": "Invalid cntlid range [1-0]" 00:13:25.032 }' 00:13:25.032 05:28:28 -- target/invalid.sh@78 -- # [[ request: 00:13:25.032 { 00:13:25.032 "nqn": "nqn.2016-06.io.spdk:cnode2413", 00:13:25.032 "max_cntlid": 0, 00:13:25.032 "method": "nvmf_create_subsystem", 00:13:25.032 "req_id": 1 00:13:25.032 } 00:13:25.032 Got JSON-RPC error response 00:13:25.032 response: 00:13:25.032 { 00:13:25.032 "code": -32602, 00:13:25.032 "message": "Invalid cntlid range [1-0]" 00:13:25.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.032 05:28:28 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20247 -I 65520 00:13:25.294 [2024-12-07 05:28:28.349592] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20247: invalid cntlid range [1-65520] 00:13:25.294 05:28:28 -- target/invalid.sh@79 -- # out='request: 00:13:25.294 { 00:13:25.294 "nqn": "nqn.2016-06.io.spdk:cnode20247", 00:13:25.294 "max_cntlid": 65520, 00:13:25.294 "method": "nvmf_create_subsystem", 00:13:25.294 "req_id": 1 00:13:25.294 } 00:13:25.294 Got JSON-RPC error response 00:13:25.294 response: 00:13:25.294 { 00:13:25.294 "code": -32602, 00:13:25.294 "message": "Invalid cntlid range [1-65520]" 00:13:25.294 }' 00:13:25.294 05:28:28 -- target/invalid.sh@80 -- # [[ request: 00:13:25.294 { 00:13:25.294 "nqn": "nqn.2016-06.io.spdk:cnode20247", 00:13:25.294 "max_cntlid": 65520, 00:13:25.294 "method": "nvmf_create_subsystem", 00:13:25.294 "req_id": 1 00:13:25.294 } 00:13:25.294 Got JSON-RPC error 
response 00:13:25.294 response: 00:13:25.294 { 00:13:25.294 "code": -32602, 00:13:25.294 "message": "Invalid cntlid range [1-65520]" 00:13:25.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.294 05:28:28 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15747 -i 6 -I 5 00:13:25.294 [2024-12-07 05:28:28.526195] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15747: invalid cntlid range [6-5] 00:13:25.556 05:28:28 -- target/invalid.sh@83 -- # out='request: 00:13:25.556 { 00:13:25.556 "nqn": "nqn.2016-06.io.spdk:cnode15747", 00:13:25.556 "min_cntlid": 6, 00:13:25.556 "max_cntlid": 5, 00:13:25.556 "method": "nvmf_create_subsystem", 00:13:25.556 "req_id": 1 00:13:25.556 } 00:13:25.556 Got JSON-RPC error response 00:13:25.556 response: 00:13:25.556 { 00:13:25.556 "code": -32602, 00:13:25.556 "message": "Invalid cntlid range [6-5]" 00:13:25.556 }' 00:13:25.556 05:28:28 -- target/invalid.sh@84 -- # [[ request: 00:13:25.556 { 00:13:25.556 "nqn": "nqn.2016-06.io.spdk:cnode15747", 00:13:25.556 "min_cntlid": 6, 00:13:25.556 "max_cntlid": 5, 00:13:25.556 "method": "nvmf_create_subsystem", 00:13:25.556 "req_id": 1 00:13:25.556 } 00:13:25.556 Got JSON-RPC error response 00:13:25.556 response: 00:13:25.556 { 00:13:25.556 "code": -32602, 00:13:25.556 "message": "Invalid cntlid range [6-5]" 00:13:25.556 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.556 05:28:28 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:25.556 05:28:28 -- target/invalid.sh@87 -- # out='request: 00:13:25.556 { 00:13:25.556 "name": "foobar", 00:13:25.556 "method": "nvmf_delete_target", 00:13:25.556 "req_id": 1 00:13:25.556 } 00:13:25.556 Got JSON-RPC error response 00:13:25.556 response: 00:13:25.556 { 00:13:25.556 "code": -32602, 00:13:25.556 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:25.556 }' 00:13:25.556 05:28:28 -- target/invalid.sh@88 -- # [[ request: 00:13:25.556 { 00:13:25.556 "name": "foobar", 00:13:25.556 "method": "nvmf_delete_target", 00:13:25.556 "req_id": 1 00:13:25.556 } 00:13:25.556 Got JSON-RPC error response 00:13:25.556 response: 00:13:25.556 { 00:13:25.556 "code": -32602, 00:13:25.556 "message": "The specified target doesn't exist, cannot delete it." 
00:13:25.556 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:25.556 05:28:28 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:25.556 05:28:28 -- target/invalid.sh@91 -- # nvmftestfini 00:13:25.556 05:28:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:25.556 05:28:28 -- nvmf/common.sh@116 -- # sync 00:13:25.556 05:28:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:25.556 05:28:28 -- nvmf/common.sh@119 -- # set +e 00:13:25.556 05:28:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:25.556 05:28:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:25.556 rmmod nvme_tcp 00:13:25.556 rmmod nvme_fabrics 00:13:25.556 rmmod nvme_keyring 00:13:25.556 05:28:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:25.556 05:28:28 -- nvmf/common.sh@123 -- # set -e 00:13:25.556 05:28:28 -- nvmf/common.sh@124 -- # return 0 00:13:25.556 05:28:28 -- nvmf/common.sh@477 -- # '[' -n 1726914 ']' 00:13:25.556 05:28:28 -- nvmf/common.sh@478 -- # killprocess 1726914 00:13:25.556 05:28:28 -- common/autotest_common.sh@936 -- # '[' -z 1726914 ']' 00:13:25.556 05:28:28 -- common/autotest_common.sh@940 -- # kill -0 1726914 00:13:25.556 05:28:28 -- common/autotest_common.sh@941 -- # uname 00:13:25.556 05:28:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.556 05:28:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1726914 00:13:25.819 05:28:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:25.819 05:28:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:25.819 05:28:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1726914' 00:13:25.819 killing process with pid 1726914 00:13:25.819 05:28:28 -- common/autotest_common.sh@955 -- # kill 1726914 00:13:25.819 05:28:28 -- common/autotest_common.sh@960 -- # wait 1726914 00:13:25.819 05:28:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:25.819 05:28:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:25.819 05:28:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:25.819 05:28:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.819 05:28:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:25.819 05:28:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.819 05:28:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.819 05:28:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.371 05:28:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:28.371 00:13:28.371 real 0m13.990s 00:13:28.371 user 0m19.769s 00:13:28.371 sys 0m6.657s 00:13:28.371 05:28:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:28.371 05:28:31 -- common/autotest_common.sh@10 -- # set +x 00:13:28.371 ************************************ 00:13:28.371 END TEST nvmf_invalid 00:13:28.371 ************************************ 00:13:28.371 05:28:31 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:28.371 05:28:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:28.371 05:28:31 -- common/autotest_common.sh@10 -- # set +x 00:13:28.371 ************************************ 00:13:28.371 START TEST nvmf_abort 00:13:28.371 ************************************ 00:13:28.371 05:28:31 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:28.371 * Looking for test storage... 00:13:28.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.371 05:28:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:28.371 05:28:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:28.371 05:28:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:28.371 05:28:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:28.371 05:28:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:28.371 05:28:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:28.371 05:28:31 -- scripts/common.sh@335 -- # IFS=.-: 00:13:28.371 05:28:31 -- scripts/common.sh@335 -- # read -ra ver1 00:13:28.371 05:28:31 -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.371 05:28:31 -- scripts/common.sh@336 -- # read -ra ver2 00:13:28.371 05:28:31 -- scripts/common.sh@337 -- # local 'op=<' 00:13:28.371 05:28:31 -- scripts/common.sh@339 -- # ver1_l=2 00:13:28.371 05:28:31 -- scripts/common.sh@340 -- # ver2_l=1 00:13:28.371 05:28:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:28.371 05:28:31 -- scripts/common.sh@343 -- # case "$op" in 00:13:28.371 05:28:31 -- scripts/common.sh@344 -- # : 1 00:13:28.371 05:28:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:28.371 05:28:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.371 05:28:31 -- scripts/common.sh@364 -- # decimal 1 00:13:28.371 05:28:31 -- scripts/common.sh@352 -- # local d=1 00:13:28.371 05:28:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.371 05:28:31 -- scripts/common.sh@354 -- # echo 1 00:13:28.371 05:28:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:28.371 05:28:31 -- scripts/common.sh@365 -- # decimal 2 00:13:28.371 05:28:31 -- scripts/common.sh@352 -- # local d=2 00:13:28.371 05:28:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.371 05:28:31 -- scripts/common.sh@354 -- # echo 2 00:13:28.371 05:28:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:28.371 05:28:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:28.371 05:28:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:28.371 05:28:31 -- scripts/common.sh@367 -- # return 0 00:13:28.371 05:28:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.371 --rc genhtml_branch_coverage=1 00:13:28.371 --rc genhtml_function_coverage=1 00:13:28.371 --rc genhtml_legend=1 00:13:28.371 --rc geninfo_all_blocks=1 00:13:28.371 --rc geninfo_unexecuted_blocks=1 00:13:28.371 00:13:28.371 ' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.371 --rc genhtml_branch_coverage=1 00:13:28.371 --rc genhtml_function_coverage=1 00:13:28.371 --rc genhtml_legend=1 00:13:28.371 --rc geninfo_all_blocks=1 00:13:28.371 --rc geninfo_unexecuted_blocks=1 00:13:28.371 00:13:28.371 ' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.371 --rc genhtml_branch_coverage=1 00:13:28.371 --rc 
genhtml_function_coverage=1 00:13:28.371 --rc genhtml_legend=1 00:13:28.371 --rc geninfo_all_blocks=1 00:13:28.371 --rc geninfo_unexecuted_blocks=1 00:13:28.371 00:13:28.371 ' 00:13:28.371 05:28:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.371 --rc genhtml_branch_coverage=1 00:13:28.371 --rc genhtml_function_coverage=1 00:13:28.371 --rc genhtml_legend=1 00:13:28.371 --rc geninfo_all_blocks=1 00:13:28.371 --rc geninfo_unexecuted_blocks=1 00:13:28.371 00:13:28.371 ' 00:13:28.371 05:28:31 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.371 05:28:31 -- nvmf/common.sh@7 -- # uname -s 00:13:28.371 05:28:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.371 05:28:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.371 05:28:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.371 05:28:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.371 05:28:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.371 05:28:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.371 05:28:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.371 05:28:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.371 05:28:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.371 05:28:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.371 05:28:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:28.371 05:28:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:28.371 05:28:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.371 05:28:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.371 05:28:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.371 05:28:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.371 05:28:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.371 05:28:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.371 05:28:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.371 05:28:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.371 05:28:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.371 05:28:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.371 05:28:31 -- paths/export.sh@5 -- # export PATH 00:13:28.371 05:28:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.371 05:28:31 -- nvmf/common.sh@46 -- # : 0 00:13:28.371 05:28:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:28.371 05:28:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:28.371 05:28:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:28.371 05:28:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.371 05:28:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.371 05:28:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:28.371 05:28:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:28.371 05:28:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:28.371 05:28:31 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.371 05:28:31 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:28.371 05:28:31 -- target/abort.sh@14 -- # nvmftestinit 00:13:28.371 05:28:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:28.371 05:28:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.371 05:28:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:28.371 05:28:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:28.371 05:28:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:28.371 05:28:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.371 05:28:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.371 05:28:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.371 05:28:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:28.371 05:28:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:28.371 05:28:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:28.371 05:28:31 -- common/autotest_common.sh@10 -- # set +x 00:13:36.522 05:28:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:36.522 05:28:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:36.522 05:28:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:36.522 05:28:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:36.522 05:28:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:36.522 05:28:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:36.522 05:28:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:36.522 05:28:38 -- nvmf/common.sh@294 -- # net_devs=() 00:13:36.522 05:28:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:36.522 05:28:38 -- nvmf/common.sh@295 -- 
# e810=() 00:13:36.522 05:28:38 -- nvmf/common.sh@295 -- # local -ga e810 00:13:36.522 05:28:38 -- nvmf/common.sh@296 -- # x722=() 00:13:36.522 05:28:38 -- nvmf/common.sh@296 -- # local -ga x722 00:13:36.522 05:28:38 -- nvmf/common.sh@297 -- # mlx=() 00:13:36.522 05:28:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:36.522 05:28:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.522 05:28:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:36.522 05:28:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:36.522 05:28:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:36.522 05:28:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.522 05:28:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:36.522 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:36.522 05:28:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.522 05:28:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:36.522 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:36.522 05:28:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:36.522 05:28:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:36.522 05:28:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.522 05:28:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.522 05:28:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.522 05:28:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.522 05:28:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:36.522 Found 
net devices under 0000:31:00.0: cvl_0_0 00:13:36.522 05:28:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.522 05:28:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.522 05:28:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.522 05:28:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.523 05:28:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.523 05:28:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:36.523 Found net devices under 0000:31:00.1: cvl_0_1 00:13:36.523 05:28:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.523 05:28:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:36.523 05:28:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:36.523 05:28:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:36.523 05:28:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:36.523 05:28:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:36.523 05:28:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.523 05:28:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.523 05:28:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.523 05:28:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:36.523 05:28:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.523 05:28:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.523 05:28:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:36.523 05:28:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.523 05:28:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.523 05:28:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:36.523 05:28:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:36.523 05:28:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.523 05:28:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.523 05:28:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.523 05:28:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.523 05:28:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:36.523 05:28:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.523 05:28:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.523 05:28:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.523 05:28:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:36.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:13:36.523 00:13:36.523 --- 10.0.0.2 ping statistics --- 00:13:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.523 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:13:36.523 05:28:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:13:36.523 00:13:36.523 --- 10.0.0.1 ping statistics --- 00:13:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.523 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:36.523 05:28:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.523 05:28:38 -- nvmf/common.sh@410 -- # return 0 00:13:36.523 05:28:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.523 05:28:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.523 05:28:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.523 05:28:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.523 05:28:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.523 05:28:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.523 05:28:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.523 05:28:38 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:36.523 05:28:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.523 05:28:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.523 05:28:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.523 05:28:38 -- nvmf/common.sh@469 -- # nvmfpid=1732194 00:13:36.523 05:28:38 -- nvmf/common.sh@470 -- # waitforlisten 1732194 00:13:36.523 05:28:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.523 05:28:38 -- common/autotest_common.sh@829 -- # '[' -z 1732194 ']' 00:13:36.523 05:28:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.523 05:28:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.523 05:28:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.523 05:28:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.523 05:28:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.523 [2024-12-07 05:28:38.880707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:36.523 [2024-12-07 05:28:38.880759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.523 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.523 [2024-12-07 05:28:38.970210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.523 [2024-12-07 05:28:39.061478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.523 [2024-12-07 05:28:39.061651] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.523 [2024-12-07 05:28:39.061664] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.523 [2024-12-07 05:28:39.061672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.523 [2024-12-07 05:28:39.061863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.523 [2024-12-07 05:28:39.062054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.523 [2024-12-07 05:28:39.062115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.523 05:28:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.523 05:28:39 -- common/autotest_common.sh@862 -- # return 0 00:13:36.523 05:28:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.523 05:28:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.523 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.523 05:28:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.523 05:28:39 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:36.523 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.523 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.523 [2024-12-07 05:28:39.712181] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.523 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.523 05:28:39 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:36.523 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.523 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.523 Malloc0 00:13:36.782 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.782 05:28:39 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.782 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.782 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.782 Delay0 00:13:36.782 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.782 05:28:39 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:36.782 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.783 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.783 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.783 05:28:39 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:36.783 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.783 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.783 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.783 05:28:39 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:36.783 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.783 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.783 [2024-12-07 05:28:39.803869] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.783 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.783 05:28:39 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.783 05:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.783 05:28:39 -- common/autotest_common.sh@10 -- # set +x 00:13:36.783 05:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.783 05:28:39 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:36.783 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.783 [2024-12-07 05:28:39.912430] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:39.326 Initializing NVMe Controllers 00:13:39.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:39.326 controller IO queue size 128 less than required 00:13:39.326 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:39.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:39.326 Initialization complete. Launching workers. 00:13:39.326 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34663 00:13:39.326 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34724, failed to submit 62 00:13:39.326 success 34663, unsuccess 61, failed 0 00:13:39.326 05:28:42 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:39.326 05:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.326 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:13:39.326 05:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.326 05:28:42 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:39.326 05:28:42 -- target/abort.sh@38 -- # nvmftestfini 00:13:39.326 05:28:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.326 05:28:42 -- nvmf/common.sh@116 -- # sync 00:13:39.326 05:28:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:39.326 05:28:42 -- nvmf/common.sh@119 -- # set +e 00:13:39.326 05:28:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:39.326 05:28:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:39.326 rmmod nvme_tcp 00:13:39.326 rmmod nvme_fabrics 00:13:39.326 rmmod nvme_keyring 00:13:39.326 05:28:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:39.326 05:28:42 -- nvmf/common.sh@123 -- # set -e 00:13:39.326 05:28:42 -- nvmf/common.sh@124 -- # return 0 00:13:39.326 05:28:42 -- nvmf/common.sh@477 -- # '[' -n 1732194 ']' 00:13:39.326 05:28:42 -- nvmf/common.sh@478 -- # killprocess 1732194 00:13:39.326 05:28:42 -- common/autotest_common.sh@936 -- # '[' -z 1732194 ']' 00:13:39.326 05:28:42 -- common/autotest_common.sh@940 -- # kill -0 1732194 00:13:39.326 05:28:42 -- common/autotest_common.sh@941 -- # uname 00:13:39.326 05:28:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.326 05:28:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1732194 00:13:39.326 05:28:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:39.326 05:28:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:39.326 05:28:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1732194' 00:13:39.326 killing process with pid 1732194 00:13:39.326 05:28:42 -- common/autotest_common.sh@955 -- # kill 1732194 00:13:39.327 05:28:42 -- common/autotest_common.sh@960 -- # wait 1732194 00:13:39.327 05:28:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.327 05:28:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.327 05:28:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.327 05:28:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.327 05:28:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.327 
05:28:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.327 05:28:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.327 05:28:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.239 05:28:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:41.239 00:13:41.239 real 0m13.386s 00:13:41.239 user 0m13.894s 00:13:41.239 sys 0m6.544s 00:13:41.239 05:28:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.239 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.239 ************************************ 00:13:41.239 END TEST nvmf_abort 00:13:41.239 ************************************ 00:13:41.501 05:28:44 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.501 05:28:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.501 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.501 ************************************ 00:13:41.501 START TEST nvmf_ns_hotplug_stress 00:13:41.501 ************************************ 00:13:41.501 05:28:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.501 * Looking for test storage... 00:13:41.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.501 05:28:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:41.501 05:28:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:41.501 05:28:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:41.501 05:28:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:41.501 05:28:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:41.501 05:28:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:41.501 05:28:44 -- scripts/common.sh@335 -- # IFS=.-: 00:13:41.501 05:28:44 -- scripts/common.sh@335 -- # read -ra ver1 00:13:41.501 05:28:44 -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.501 05:28:44 -- scripts/common.sh@336 -- # read -ra ver2 00:13:41.501 05:28:44 -- scripts/common.sh@337 -- # local 'op=<' 00:13:41.501 05:28:44 -- scripts/common.sh@339 -- # ver1_l=2 00:13:41.501 05:28:44 -- scripts/common.sh@340 -- # ver2_l=1 00:13:41.501 05:28:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:41.501 05:28:44 -- scripts/common.sh@343 -- # case "$op" in 00:13:41.501 05:28:44 -- scripts/common.sh@344 -- # : 1 00:13:41.501 05:28:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:41.501 05:28:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:41.501 05:28:44 -- scripts/common.sh@364 -- # decimal 1 00:13:41.501 05:28:44 -- scripts/common.sh@352 -- # local d=1 00:13:41.501 05:28:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.501 05:28:44 -- scripts/common.sh@354 -- # echo 1 00:13:41.501 05:28:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:41.501 05:28:44 -- scripts/common.sh@365 -- # decimal 2 00:13:41.501 05:28:44 -- scripts/common.sh@352 -- # local d=2 00:13:41.501 05:28:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.501 05:28:44 -- scripts/common.sh@354 -- # echo 2 00:13:41.501 05:28:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:41.501 05:28:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:41.501 05:28:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:41.501 05:28:44 -- scripts/common.sh@367 -- # return 0 00:13:41.501 05:28:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:41.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.501 --rc genhtml_branch_coverage=1 00:13:41.501 --rc genhtml_function_coverage=1 00:13:41.501 --rc genhtml_legend=1 00:13:41.501 --rc geninfo_all_blocks=1 00:13:41.501 --rc geninfo_unexecuted_blocks=1 00:13:41.501 00:13:41.501 ' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:41.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.501 --rc genhtml_branch_coverage=1 00:13:41.501 --rc genhtml_function_coverage=1 00:13:41.501 --rc genhtml_legend=1 00:13:41.501 --rc geninfo_all_blocks=1 00:13:41.501 --rc geninfo_unexecuted_blocks=1 00:13:41.501 00:13:41.501 ' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:41.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.501 --rc genhtml_branch_coverage=1 00:13:41.501 --rc genhtml_function_coverage=1 00:13:41.501 --rc genhtml_legend=1 00:13:41.501 --rc geninfo_all_blocks=1 00:13:41.501 --rc geninfo_unexecuted_blocks=1 00:13:41.501 00:13:41.501 ' 00:13:41.501 05:28:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:41.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.501 --rc genhtml_branch_coverage=1 00:13:41.501 --rc genhtml_function_coverage=1 00:13:41.501 --rc genhtml_legend=1 00:13:41.501 --rc geninfo_all_blocks=1 00:13:41.501 --rc geninfo_unexecuted_blocks=1 00:13:41.501 00:13:41.501 ' 00:13:41.501 05:28:44 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.501 05:28:44 -- nvmf/common.sh@7 -- # uname -s 00:13:41.501 05:28:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.501 05:28:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.501 05:28:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.501 05:28:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.501 05:28:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.501 05:28:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.501 05:28:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.501 05:28:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.501 05:28:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.501 05:28:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.501 05:28:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:41.501 05:28:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:41.501 05:28:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.501 05:28:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.501 05:28:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.501 05:28:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.501 05:28:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.501 05:28:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.501 05:28:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.501 05:28:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.501 05:28:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.501 05:28:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.501 05:28:44 -- paths/export.sh@5 -- # export PATH 00:13:41.501 05:28:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.501 05:28:44 -- nvmf/common.sh@46 -- # : 0 00:13:41.501 05:28:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:41.501 05:28:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:41.501 05:28:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:41.501 05:28:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.501 05:28:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.501 05:28:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:41.501 05:28:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:41.501 05:28:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:41.501 05:28:44 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.501 05:28:44 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:41.501 05:28:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:41.501 05:28:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.501 05:28:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:41.501 05:28:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:41.501 05:28:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:41.502 05:28:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.502 05:28:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.502 05:28:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.502 05:28:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:41.502 05:28:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:41.502 05:28:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:41.502 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:13:49.639 05:28:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:49.639 05:28:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:49.639 05:28:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:49.639 05:28:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:49.639 05:28:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:49.639 05:28:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:49.639 05:28:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:49.639 05:28:51 -- nvmf/common.sh@294 -- # net_devs=() 00:13:49.639 05:28:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:49.639 05:28:51 -- nvmf/common.sh@295 -- # e810=() 00:13:49.639 05:28:51 -- nvmf/common.sh@295 -- # local -ga e810 00:13:49.639 05:28:51 -- nvmf/common.sh@296 -- # x722=() 00:13:49.639 05:28:51 -- nvmf/common.sh@296 -- # local -ga x722 00:13:49.639 05:28:51 -- nvmf/common.sh@297 -- # mlx=() 00:13:49.639 05:28:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:49.639 05:28:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.639 05:28:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:49.639 05:28:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:49.639 
05:28:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:49.639 05:28:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:49.639 05:28:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:49.639 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:49.639 05:28:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:49.639 05:28:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:49.639 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:49.639 05:28:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:49.639 05:28:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.639 05:28:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.639 05:28:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:49.639 Found net devices under 0000:31:00.0: cvl_0_0 00:13:49.639 05:28:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.639 05:28:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:49.639 05:28:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.639 05:28:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.639 05:28:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:49.639 Found net devices under 0000:31:00.1: cvl_0_1 00:13:49.639 05:28:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.639 05:28:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:49.639 05:28:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:49.639 05:28:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:49.639 05:28:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.639 05:28:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.639 05:28:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.639 05:28:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:49.639 05:28:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.639 05:28:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.639 05:28:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:49.639 
05:28:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.639 05:28:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.639 05:28:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:49.639 05:28:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:49.639 05:28:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.639 05:28:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.639 05:28:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.639 05:28:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.639 05:28:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:49.639 05:28:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.639 05:28:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.639 05:28:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.639 05:28:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:49.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:13:49.639 00:13:49.639 --- 10.0.0.2 ping statistics --- 00:13:49.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.639 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:13:49.639 05:28:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:13:49.639 00:13:49.639 --- 10.0.0.1 ping statistics --- 00:13:49.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.639 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:49.639 05:28:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.639 05:28:52 -- nvmf/common.sh@410 -- # return 0 00:13:49.639 05:28:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:49.639 05:28:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.639 05:28:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:49.639 05:28:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:49.639 05:28:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.639 05:28:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:49.639 05:28:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:49.639 05:28:52 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:49.639 05:28:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:49.639 05:28:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.639 05:28:52 -- common/autotest_common.sh@10 -- # set +x 00:13:49.639 05:28:52 -- nvmf/common.sh@469 -- # nvmfpid=1737011 00:13:49.639 05:28:52 -- nvmf/common.sh@470 -- # waitforlisten 1737011 00:13:49.639 05:28:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:49.639 05:28:52 -- common/autotest_common.sh@829 -- # '[' -z 1737011 ']' 00:13:49.639 05:28:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.639 05:28:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.639 05:28:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:49.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.639 05:28:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.639 05:28:52 -- common/autotest_common.sh@10 -- # set +x 00:13:49.639 [2024-12-07 05:28:52.214548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:49.640 [2024-12-07 05:28:52.214613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.640 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.640 [2024-12-07 05:28:52.307134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:49.640 [2024-12-07 05:28:52.401735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:49.640 [2024-12-07 05:28:52.401910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.640 [2024-12-07 05:28:52.401922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.640 [2024-12-07 05:28:52.401930] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.640 [2024-12-07 05:28:52.402082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.640 [2024-12-07 05:28:52.402267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.640 [2024-12-07 05:28:52.402366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.900 05:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.900 05:28:53 -- common/autotest_common.sh@862 -- # return 0 00:13:49.900 05:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:49.900 05:28:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.900 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 05:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.900 05:28:53 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:49.900 05:28:53 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:50.161 [2024-12-07 05:28:53.197104] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.161 05:28:53 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.422 05:28:53 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.422 [2024-12-07 05:28:53.542622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.422 05:28:53 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.682 05:28:53 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:50.682 Malloc0 00:13:50.943 05:28:53 -- target/ns_hotplug_stress.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:50.943 Delay0 00:13:50.944 05:28:54 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.206 05:28:54 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:51.206 NULL1 00:13:51.206 05:28:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:51.467 05:28:54 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:51.467 05:28:54 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1737679 00:13:51.467 05:28:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:51.467 05:28:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.467 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.728 05:28:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.992 05:28:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:51.992 05:28:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:51.992 true 00:13:51.992 05:28:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:51.992 05:28:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.253 05:28:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.253 05:28:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:52.253 05:28:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:52.513 true 00:13:52.513 05:28:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:52.513 05:28:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.773 05:28:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.773 05:28:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:52.773 05:28:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:53.033 true 00:13:53.034 05:28:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:53.034 05:28:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.296 05:28:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.296 05:28:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:53.296 05:28:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:53.557 true 00:13:53.557 05:28:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:53.557 05:28:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.817 05:28:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.817 05:28:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:53.817 05:28:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:54.079 true 00:13:54.079 05:28:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:54.079 05:28:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.340 05:28:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.340 05:28:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:54.340 05:28:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:54.602 true 00:13:54.602 05:28:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:54.602 05:28:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.862 05:28:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.862 05:28:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:54.862 05:28:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:55.123 true 00:13:55.123 05:28:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:55.123 05:28:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.384 05:28:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.384 05:28:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:55.384 05:28:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:55.644 true 00:13:55.644 05:28:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:55.644 05:28:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.906 05:28:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.906 05:28:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 
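Everything from nvmf_tcp_init through the first namespace attaches above reduces to a short sequence of ip/iptables commands plus rpc.py calls. A condensed sketch of that bring-up, reusing the interface names, addresses, NQN and bdev parameters from this trace (paths shortened to rpc.py and ./build/bin; the harness drives the full workspace paths via ns_hotplug_stress.sh rather than by hand):

# Network: move cvl_0_0 into its own namespace, keep cvl_0_1 (initiator side) in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp

# Target bring-up inside the namespace, then the same RPCs the trace shows.
# (Wait for /var/tmp/spdk.sock before issuing RPCs, as waitforlisten does in the trace.)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py bdev_null_create NULL1 1000 512
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1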
00:13:55.906 05:28:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:56.167 true 00:13:56.167 05:28:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:56.167 05:28:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.429 05:28:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.429 05:28:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:56.429 05:28:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:56.689 true 00:13:56.689 05:28:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:56.689 05:28:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.949 05:28:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.949 05:29:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:56.949 05:29:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:57.209 true 00:13:57.209 05:29:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:57.209 05:29:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.470 05:29:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.470 05:29:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:57.470 05:29:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:57.733 true 00:13:57.733 05:29:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:57.733 05:29:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.733 05:29:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.994 05:29:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:57.994 05:29:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:58.254 true 00:13:58.254 05:29:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:58.254 05:29:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.254 05:29:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.514 05:29:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:58.514 05:29:01 -- target/ns_hotplug_stress.sh@50 -- # 
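Each numbered step above repeats the same pattern: while the spdk_nvme_perf job keeps issuing reads, the script detaches namespace 1, re-attaches Delay0, and grows NULL1 by one block. A compressed sketch of that loop as reconstructed from the trace (the trace stores the perf PID in PERF_PID; $perf_pid and the shortened rpc.py path are stand-ins here):

null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do                     # run for as long as perf is alive
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))                              # 1001, 1002, ... as logged above
    rpc.py bdev_null_resize NULL1 "$null_size"
done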
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:58.774 true 00:13:58.774 05:29:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:58.774 05:29:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.774 05:29:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.038 05:29:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:59.038 05:29:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:59.340 true 00:13:59.340 05:29:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:59.340 05:29:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.340 05:29:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.673 05:29:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:59.673 05:29:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:59.673 true 00:13:59.673 05:29:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:13:59.673 05:29:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.933 05:29:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.933 05:29:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:59.933 05:29:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:00.192 true 00:14:00.192 05:29:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:00.192 05:29:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.453 05:29:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.453 05:29:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:00.453 05:29:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:00.713 true 00:14:00.713 05:29:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:00.713 05:29:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.975 05:29:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.975 05:29:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:00.975 05:29:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:01.236 true 
00:14:01.236 05:29:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:01.236 05:29:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.496 05:29:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.496 05:29:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:01.496 05:29:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:01.755 true 00:14:01.755 05:29:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:01.755 05:29:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.020 05:29:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.020 05:29:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:02.020 05:29:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:02.281 true 00:14:02.281 05:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:02.281 05:29:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.281 05:29:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.541 05:29:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:02.541 05:29:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:02.801 true 00:14:02.801 05:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:02.801 05:29:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.801 05:29:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.061 05:29:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:03.061 05:29:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:03.322 true 00:14:03.322 05:29:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:03.322 05:29:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.322 05:29:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.582 05:29:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:03.583 05:29:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:03.843 true 00:14:03.843 05:29:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:03.843 05:29:06 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.843 05:29:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.103 05:29:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:04.103 05:29:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:04.364 true 00:14:04.364 05:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:04.364 05:29:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.364 05:29:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.624 05:29:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:04.624 05:29:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:04.624 true 00:14:04.884 05:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:04.885 05:29:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.885 05:29:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.144 05:29:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:05.144 05:29:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:05.144 true 00:14:05.405 05:29:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:05.405 05:29:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.405 05:29:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.666 05:29:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:05.666 05:29:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:05.666 true 00:14:05.926 05:29:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:05.926 05:29:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.926 05:29:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.187 05:29:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:06.187 05:29:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:06.187 true 00:14:06.187 05:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:06.187 05:29:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.449 05:29:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.709 05:29:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:06.709 05:29:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:06.709 true 00:14:06.709 05:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:06.709 05:29:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.968 05:29:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.227 05:29:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:07.228 05:29:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:07.228 true 00:14:07.228 05:29:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:07.228 05:29:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.170 05:29:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.170 Read completed with error (sct=0, sc=11) 00:14:08.431 05:29:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:08.431 05:29:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:08.431 true 00:14:08.692 05:29:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:08.692 05:29:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.692 05:29:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.952 05:29:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:08.952 05:29:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:08.952 true 00:14:08.952 05:29:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:08.952 05:29:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.213 05:29:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.501 
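The 'Read completed with error (sct=0, sc=11)' lines are reported by the spdk_nvme_perf job started earlier; reads racing with the namespace detach/re-attach/resize cycle is the scenario ns_hotplug_stress is exercising, and the batching as 'Message suppressed 999 times' is consistent with the -Q 1000 argument perf was given. To rerun just the initiator side against this target (same arguments as the original invocation, path shortened; the initiator stays in the default namespace while the target listens inside cvl_0_0_ns_spdk on 10.0.0.2:4420):

./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000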
[2024-12-07 05:29:12.497538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [2024-12-07 05:29:12.497598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeated for each remaining in-flight read in this burst; duplicate lines elided ...] 00:14:09.505 [2024-12-07 05:29:12.505891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.505913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.505936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.505965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.505994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506516] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.506980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 
[2024-12-07 05:29:12.507161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.507991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.508018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.508046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.508069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.508092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.505 [2024-12-07 05:29:12.508116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508782] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.508983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 
[2024-12-07 05:29:12.509837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.509981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.506 [2024-12-07 05:29:12.510769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.510990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511400] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.511973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 
[2024-12-07 05:29:12.512395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.507 [2024-12-07 05:29:12.512895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.512921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.512947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.512979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513954] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.513998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 
[2024-12-07 05:29:12.514758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.514990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.508 [2024-12-07 05:29:12.515543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.515992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 [2024-12-07 05:29:12.516549] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.509 
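The repeated *ERROR* lines above are the nvmf target rejecting reads whose payload buffer is too small: the command asks for NLB logical blocks at the namespace block size, but the SGL supplied with the request describes fewer bytes, likely a consequence of ns_hotplug_stress.sh resizing and re-creating the namespace while I/O is in flight. A minimal, illustrative shell sketch of the arithmetic behind the message (the variable names are ours, not SPDK's):

# Illustration only: the size comparison reported by nvmf_bdev_ctrlr_read_cmd above.
nlb=1            # logical blocks requested by the Read command
block_size=512   # namespace block size in bytes
sgl_length=1     # bytes described by the request's SGL
if (( nlb * block_size > sgl_length )); then
    echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
fi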
00:14:09.512 05:29:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 
00:14:09.513 05:29:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 
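For reference, the resize step traced at ns_hotplug_stress.sh@50 above can be replayed by hand with the same rpc.py helper; the bdev name NULL1 and the value 1034 come directly from the trace, while treating that value as the bdev's new size is our reading of bdev_null_resize's arguments rather than something stated in the log:

# Hedged sketch: re-issue the resize the stress script performs, assuming the
# target app is already running and reachable on rpc.py's default RPC socket.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py bdev_null_resize NULL1 1034   # <bdev name> <new size, as in the trace>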
00:14:09.513 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[2024-12-07 05:29:12.533845] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.533873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.533903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.533940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.533972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 
[2024-12-07 05:29:12.534723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.534831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.535976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.516 [2024-12-07 05:29:12.536570] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.536980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 
[2024-12-07 05:29:12.537467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.537977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538923] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.538969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.539228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.539253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.539276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.539300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.517 [2024-12-07 05:29:12.539324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 
[2024-12-07 05:29:12.539951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.539982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.540987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.518 [2024-12-07 05:29:12.541763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541795] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.541982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 
[2024-12-07 05:29:12.542586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.542996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.543986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544179] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.519 [2024-12-07 05:29:12.544248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 [2024-12-07 05:29:12.544803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.520 
[2024-12-07 05:29:12.544832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.525 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.526 [2024-12-07 05:29:12.563258] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.526 [2024-12-07 05:29:12.563534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.563975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 
[2024-12-07 05:29:12.563999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.564997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565631] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.565977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.566005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.527 [2024-12-07 05:29:12.566047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 
[2024-12-07 05:29:12.566477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.566993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.567974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568219] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.528 [2024-12-07 05:29:12.568359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.568825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 
[2024-12-07 05:29:12.569251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.569974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570840] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.570983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.529 [2024-12-07 05:29:12.571438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 
[2024-12-07 05:29:12.571876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.571988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.572970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573712] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.573994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.530 [2024-12-07 05:29:12.574254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 [2024-12-07 05:29:12.574481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.531 
[2024-12-07 05:29:12.574510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats verbatim through 2024-12-07 05:29:12.592891 (console timestamps 00:14:09.531 to 00:14:09.537) ...] 00:14:09.537 
[2024-12-07 05:29:12.592922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.592957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.592985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.537 [2024-12-07 05:29:12.593503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.593989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594677] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.538 [2024-12-07 05:29:12.594734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.594995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.595993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 
[2024-12-07 05:29:12.596260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.538 [2024-12-07 05:29:12.596514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.596979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597904] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.597990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.539 [2024-12-07 05:29:12.598629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 
[2024-12-07 05:29:12.598926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.598988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.599988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600755] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.600972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.540 [2024-12-07 05:29:12.601358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 
[2024-12-07 05:29:12.601606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.601982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.602972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603395] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.603983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 [2024-12-07 05:29:12.604166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.541 
[2024-12-07 05:29:12.604192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.541 [2024-12-07 05:29:12.604221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.541 [2024-12-07 05:29:12.604247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same nvmf_bdev_ctrlr_read_cmd read-length error repeats continuously through 05:29:12.623215 ...]
00:14:09.548 [2024-12-07 05:29:12.623247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548
[2024-12-07 05:29:12.623276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.623978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624847] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.624911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.625976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 
[2024-12-07 05:29:12.626089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.548 [2024-12-07 05:29:12.626554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.626991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627738] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.627995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 
[2024-12-07 05:29:12.628551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.628966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.549 [2024-12-07 05:29:12.629452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.550 [2024-12-07 05:29:12.629939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.629994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:09.550 [2024-12-07 05:29:12.630400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.630971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.631984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632008] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 [2024-12-07 05:29:12.632743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.550 
[2024-12-07 05:29:12.632778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.632969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.633971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634682] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [2024-12-07 05:29:12.634720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.551 [... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 message repeats for each negative-path read in this unit test; only the timestamps advance (2024-12-07 05:29:12.634751 through 05:29:12.653409, console time 00:14:09.551 through 00:14:09.557) ...] 00:14:09.557 [2024-12-07 05:29:12.653432] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.653976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 
[2024-12-07 05:29:12.654403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.654991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.557 [2024-12-07 05:29:12.655306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.655994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656093] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.656974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 
[2024-12-07 05:29:12.656997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.558 [2024-12-07 05:29:12.657904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.657928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.657952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 true 00:14:09.559 [2024-12-07 05:29:12.657975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.657999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.658979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659040] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 
[2024-12-07 05:29:12.659889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.659985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.660938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661609] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.559 [2024-12-07 05:29:12.661826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.661858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.661887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.661914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.661949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.661979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 
[2024-12-07 05:29:12.662412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.662954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.560 [2024-12-07 05:29:12.663818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.663981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.664004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.664031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.664055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.664078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.560 [2024-12-07 05:29:12.664101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
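The message repeated above is emitted by SPDK's NVMe-oF bdev layer (ctrlr_bdev.c, nvmf_bdev_ctrlr_read_cmd) when a read command's transfer length (NLB x block size, here 1 x 512 bytes) exceeds the data buffer described by the request's SGL (here 1 byte); each such read is rejected and completed with sct=0, sc=15, i.e. the generic "Data SGL Length Invalid" status, which is what the suppressed-message summary reports. The stand-alone C sketch below only illustrates that length check; it is not the SPDK source, and the function and variable names are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Generic NVMe status: sct=0 (generic), sc=0x0f (Data SGL Length Invalid) -> "sc=15" in the log. */
#define SCT_GENERIC                 0x00
#define SC_DATA_SGL_LENGTH_INVALID  0x0f

/* Returns true if the read's transfer length fits in the buffer the SGL describes. */
static bool
read_length_fits_sgl(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length,
                     uint8_t *sct, uint8_t *sc)
{
    if (num_blocks * (uint64_t)block_size > sgl_length) {
        /* Same condition the log line reports: NLB * block size > SGL length. */
        fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                (unsigned long long)num_blocks, block_size, sgl_length);
        *sct = SCT_GENERIC;
        *sc = SC_DATA_SGL_LENGTH_INVALID;
        return false;
    }
    return true;
}

int
main(void)
{
    uint8_t sct = 0, sc = 0;

    /* The case seen throughout this log: 1 block of 512 bytes against a 1-byte SGL. */
    if (!read_length_fits_sgl(1, 512, 1, &sct, &sc)) {
        printf("Read completed with error (sct=%u, sc=%u)\n", (unsigned)sct, (unsigned)sc);
    }
    return 0;
}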
[... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated, timestamps 2024-12-07 05:29:12.663818 through 05:29:12.668668 ...]
00:14:09.562 [2024-12-07 05:29:12.668697] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.668957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 
[2024-12-07 05:29:12.669567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.669736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.670987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671335] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.562 [2024-12-07 05:29:12.671358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.671973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 
[2024-12-07 05:29:12.672101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.672989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673918] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.673977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.563 [2024-12-07 05:29:12.674459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.674953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 
[2024-12-07 05:29:12.674992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.675973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676520] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.676989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 
[2024-12-07 05:29:12.677163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.564 [2024-12-07 05:29:12.677256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.677989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.678973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679039] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 [2024-12-07 05:29:12.679937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.565 
[2024-12-07 05:29:12.679966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously here; duplicate entries omitted ...]
00:14:09.566 05:29:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679
00:14:09.566 05:29:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same nvmf_bdev_ctrlr_read_cmd *ERROR* line continues to repeat; duplicate entries omitted ...]
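For anyone skimming this part of the log, the two traced script lines above are the interesting bit: ns_hotplug_stress.sh checks with kill -0 that its background I/O generator (PID 1737679) is still alive, then calls the SPDK rpc.py helper to remove namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 while that I/O is still in flight, which is what provokes the flood of rejected reads. A rough bash sketch of that pattern follows; the two commands are copied from the trace, but the loop structure and the names rpc_py, NQN and PERF_PID are stand-ins, not the actual script.

  #!/usr/bin/env bash
  # Sketch only: reproduces the two traced calls (@44 and @45) inside an assumed loop.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  PERF_PID=1737679                                  # background I/O generator checked in the trace

  while kill -0 "$PERF_PID" 2>/dev/null; do         # @44: is the I/O process still running?
      "$rpc_py" nvmf_subsystem_remove_ns "$NQN" 1   # @45: pull namespace 1 out from under the I/O
      # the real script's loop body may differ (e.g. it may re-add the namespace); omitted here
  done

Because the namespace is yanked while reads are outstanding, the surrounding wall of *ERROR* lines is most likely the expected effect of the hot-unplug stress rather than a failure of the job itself.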
[... the same nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously here; duplicate entries omitted ...]
00:14:09.571 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... further duplicate nvmf_bdev_ctrlr_read_cmd *ERROR* lines omitted ...]
length 1 00:14:09.571 [2024-12-07 05:29:12.697802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.697984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.698973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699526] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.571 [2024-12-07 05:29:12.699860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.699889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.699921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.699944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.699969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.699992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 
[2024-12-07 05:29:12.700229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.700971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.701988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702078] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.572 [2024-12-07 05:29:12.702918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.702949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.702980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 
[2024-12-07 05:29:12.703066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.703979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704783] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.704992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 
[2024-12-07 05:29:12.705607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.705924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.706108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.706136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.573 [2024-12-07 05:29:12.706163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.706990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707419] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.707988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 
[2024-12-07 05:29:12.708441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.708997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.709029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.709058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.709087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.574 [2024-12-07 05:29:12.709122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.868 [2024-12-07 05:29:12.727906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.727939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.727970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728465] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.728994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 
[2024-12-07 05:29:12.729376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.729976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.869 [2024-12-07 05:29:12.730730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.730962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731143] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 
[2024-12-07 05:29:12.731883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.731954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.870 [2024-12-07 05:29:12.731977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 
05:29:12.732484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.732800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:09.870 [2024-12-07 05:29:12.733533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.870 [2024-12-07 05:29:12.733855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.733997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.734856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735144] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 
[2024-12-07 05:29:12.735958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.735995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.736998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.871 [2024-12-07 05:29:12.737197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737838] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.737996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 [2024-12-07 05:29:12.738637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.872 
[2024-12-07 05:29:12.738664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.872 [2024-12-07 05:29:12.738692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.872 [...] (the same *ERROR* entry repeats, one per read command, for every timestamp between 05:29:12.738723 and 05:29:12.757350)
00:14:09.879 [2024-12-07 05:29:12.757378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.879
[2024-12-07 05:29:12.757413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.757992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758965] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.758993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.759958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 
[2024-12-07 05:29:12.759990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.879 [2024-12-07 05:29:12.760025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.760948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761837] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.761996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 
[2024-12-07 05:29:12.762657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.762995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.880 [2024-12-07 05:29:12.763357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.763993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764497] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.764970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 
[2024-12-07 05:29:12.765279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.765982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.881 [2024-12-07 05:29:12.766234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.882 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.882 [2024-12-07 05:29:12.766277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.766968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:09.882 [2024-12-07 05:29:12.767062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.767880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 [2024-12-07 05:29:12.768843] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.882 
[2024-12-07 05:29:12.768882 .. 05:29:12.787382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message logged repeatedly throughout this interval; duplicate entries condensed) 
00:14:09.888 [2024-12-07 05:29:12.787412] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.787996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.888 [2024-12-07 05:29:12.788249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 
[2024-12-07 05:29:12.788318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.788976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789872] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.789990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 
[2024-12-07 05:29:12.790634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.790991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.889 [2024-12-07 05:29:12.791243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.791985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792487] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.792999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 
[2024-12-07 05:29:12.793299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.793834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.890 [2024-12-07 05:29:12.794594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794921] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.794981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 
[2024-12-07 05:29:12.795581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.795700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.796995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.797938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798173] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.891 [2024-12-07 05:29:12.798374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 [2024-12-07 05:29:12.798932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892 
[2024-12-07 05:29:12.798964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.892
[identical nvmf_bdev_ctrlr_read_cmd read errors repeated; entries with timestamps 05:29:12.798996 through 05:29:12.800663 omitted]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:09.892
[identical nvmf_bdev_ctrlr_read_cmd read errors repeated; entries with timestamps 05:29:12.800686 through 05:29:12.815614 omitted]
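The entries condensed above all report the same condition from nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:298): each read command asks for NLB 1 block of 512 bytes, but the SGL attached to the command describes only 1 byte, so the target rejects the request; the companion "Read completed with error (sct=0, sc=15)" line is the matching completion, sc=15 (0x0f) being the NVMe generic status "Data SGL Length Invalid". The fragment below is only an illustrative sketch of that length check with hypothetical helper and constant names, not the SPDK source.

/* Sketch of the length check implied by the log lines above; names are
 * illustrative, not SPDK's. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x0  /* generic command status type        */
#define SC_DATA_SGL_LENGTH_INVALID 0x0f /* 15 decimal, matching "sc=15" above */

static int
read_cmd_check_length(uint64_t nlb, uint32_t block_size, uint64_t sgl_length,
                      int *sct, int *sc)
{
        /* The transfer the read command requests must fit in the SGL buffer. */
        if (nlb * block_size > sgl_length) {
                fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
                *sct = SCT_GENERIC;
                *sc = SC_DATA_SGL_LENGTH_INVALID;
                return -1;
        }
        return 0;
}

int
main(void)
{
        int sct, sc;
        /* The values seen throughout this burst: 1 block of 512 bytes offered
         * an SGL of only 1 byte, so the read is rejected. */
        if (read_cmd_check_length(1, 512, 1, &sct, &sc) != 0) {
                printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
        }
        return 0;
}

Run with the values from this log, the sketch prints the same two messages seen here. The error burst resumes below.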
[2024-12-07 05:29:12.815643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.815988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.816979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817079] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.897 [2024-12-07 05:29:12.817552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.817980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 
[2024-12-07 05:29:12.818004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.818983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819835] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.819989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 
[2024-12-07 05:29:12.820730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.898 [2024-12-07 05:29:12.820852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.820882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.820910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.820938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.820971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.821985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822499] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.822999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 
[2024-12-07 05:29:12.823195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.823982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.824022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.824057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.824092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.824124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.899 [2024-12-07 05:29:12.824159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.824995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825055] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.825780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 
[2024-12-07 05:29:12.826033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.826972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.900 [2024-12-07 05:29:12.827444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827623] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.827933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 [2024-12-07 05:29:12.828602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.901 
[2024-12-07 05:29:12.828632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same read error repeated with consecutive timestamps through 05:29:12.829956 ...]
00:14:09.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:09.901 05:29:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:09.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[... five more identical suppression notices ...]
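The rpc.py call above is the hot-add step of the ns_hotplug_stress test: it attaches the Delay0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1 while reads against the subsystem are still outstanding. The surrounding errors show the target reporting read commands whose transfer length (NLB 1 * block size 512 = 512 bytes) exceeds the 1-byte buffer described by the SGL, while the suppressed "Read completed with error (sct=0, sc=11)" notices report the failed read completions. A minimal shell sketch of this kind of add/remove cycle follows; the RPC_PY and NQN variables, the namespace ID of 1, and the loop count are assumptions for illustration and are not taken from ns_hotplug_stress.sh itself.

  # Illustrative sketch only, not the contents of ns_hotplug_stress.sh.
  # RPC_PY, NQN, the namespace ID and the loop count are assumed values.
  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 10); do
      # Detach namespace 1, then re-attach the Delay0 bdev while I/O keeps running.
      "$RPC_PY" nvmf_subsystem_remove_ns "$NQN" 1
      "$RPC_PY" nvmf_subsystem_add_ns "$NQN" Delay0
  done

Delay0 is presumably a delay bdev created earlier in the run; it keeps reads outstanding long enough to overlap the namespace changes, which is the race this stress test exercises.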
00:14:09.901 [2024-12-07 05:29:13.006319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same read error repeated continuously through 05:29:13.022862 ...]
00:14:09.907 [2024-12-07 05:29:13.022893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.022924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.022954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.022983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023696] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.023989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 
[2024-12-07 05:29:13.024530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.024991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.907 [2024-12-07 05:29:13.025326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025772] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.025818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.026401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 
[2024-12-07 05:29:13.027578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.027990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.028994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029182] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.908 [2024-12-07 05:29:13.029307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.029974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 
[2024-12-07 05:29:13.030040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.030973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031712] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.031992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.909 [2024-12-07 05:29:13.032282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 
[2024-12-07 05:29:13.032352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.032627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.033973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 [2024-12-07 05:29:13.034810] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.910 
[2024-12-07 05:29:13.034840 - 05:29:13.037151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated continuously over this interval; duplicate log lines condensed) 
00:14:09.911 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[2024-12-07 05:29:13.037183 - 05:29:13.037726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate log lines condensed) 
00:14:09.911 05:29:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 
[2024-12-07 05:29:13.037748 - 05:29:13.038083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate log lines condensed) 
00:14:09.912 05:29:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 
[2024-12-07 05:29:13.038106 - 05:29:13.052572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated continuously over this interval; duplicate log lines condensed) 00:14:09.916 
[2024-12-07 05:29:13.052595] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.052987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.053014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.053038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.916 [2024-12-07 05:29:13.053062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 
[2024-12-07 05:29:13.053222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.053971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.054982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055047] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.055970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 
[2024-12-07 05:29:13.056094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.917 [2024-12-07 05:29:13.056484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.056993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057596] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.057994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 
[2024-12-07 05:29:13.058447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.058986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.918 [2024-12-07 05:29:13.059184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.059985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060084] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 
[2024-12-07 05:29:13.060884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.060971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.061989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.919 [2024-12-07 05:29:13.062692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062752] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.062994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 [2024-12-07 05:29:13.063477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:09.920 
00:14:09.920 [2024-12-07 05:29:13.063501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:10.212 [the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 05:29:13.063501 through 05:29:13.082162]
size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082835] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.212 [2024-12-07 05:29:13.082998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 
[2024-12-07 05:29:13.083634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.083916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.084975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085869] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.085978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.213 [2024-12-07 05:29:13.086710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 
[2024-12-07 05:29:13.086738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.086984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.087997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088445] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.088989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 
[2024-12-07 05:29:13.089362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.214 [2024-12-07 05:29:13.089868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.089891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.089914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.089937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.089961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.089984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.090998] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.091982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 
[2024-12-07 05:29:13.092273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.092978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.215 [2024-12-07 05:29:13.093008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.216 [2024-12-07 05:29:13.093883] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:10.216 [2024-12-07 05:29:13.093910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:10.216 [identical ctrlr_bdev.c:298 read errors repeated continuously; timestamps 05:29:13.093910 through 05:29:13.105564]
00:14:10.219 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:10.219 [identical ctrlr_bdev.c:298 read errors continue; timestamps 05:29:13.105593 through 05:29:13.112272]
00:14:10.221
[2024-12-07 05:29:13.112304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.221 [2024-12-07 05:29:13.112776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.112984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113959] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.113988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 
[2024-12-07 05:29:13.114752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.114968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.222 [2024-12-07 05:29:13.115850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.115873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.115898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.115926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.115956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.115988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116397] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.116967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 
[2024-12-07 05:29:13.117318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.117969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.223 [2024-12-07 05:29:13.118830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119125] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 
[2024-12-07 05:29:13.119887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.119985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.120795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121464] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.224 [2024-12-07 05:29:13.121585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.121989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 
[2024-12-07 05:29:13.122273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.122901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.123144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.123178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.123206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.123237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.225 [2024-12-07 05:29:13.123270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:10.225 [2024-12-07 05:29:13.123303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd error repeated several hundred more times between 05:29:13.123327 and 05:29:13.141670 (wall clock 00:14:10.225-00:14:10.231); only the timestamps change]
00:14:10.229 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:10.231
[2024-12-07 05:29:13.141707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.141991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.142970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143437] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.143951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 
[2024-12-07 05:29:13.144391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.231 [2024-12-07 05:29:13.144717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.144992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145798] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.145869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 
[2024-12-07 05:29:13.146775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.146999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.232 [2024-12-07 05:29:13.147825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.147860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.147894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.147926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.147960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.147993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148620] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.148973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 
[2024-12-07 05:29:13.149566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.149995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.150979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.151016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.151049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.233 [2024-12-07 05:29:13.151086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151518] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.151987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 
[2024-12-07 05:29:13.152304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.152636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.234 [2024-12-07 05:29:13.153627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously, several hundred occurrences, from 05:29:13.153627 through 05:29:13.171609 ...]
00:14:10.240 [2024-12-07 05:29:13.171633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.171992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:10.240 [2024-12-07 05:29:13.172863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.172988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:10.240 [2024-12-07 05:29:13.173059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.173994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.240 [2024-12-07 05:29:13.174792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174821] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.174979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 
[2024-12-07 05:29:13.175703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.175982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.176878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177380] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.177994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.178032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.241 [2024-12-07 05:29:13.178061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 
[2024-12-07 05:29:13.178241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.178989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.179976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180125] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 
[2024-12-07 05:29:13.180961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.180994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.181026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.242 [2024-12-07 05:29:13.181053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.181974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182718] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.182991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 [2024-12-07 05:29:13.183465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 
[2024-12-07 05:29:13.183536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.243 
[... identical *ERROR* line from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated through 2024-12-07 05:29:13.187220 (00:14:10.243-00:14:10.244) ...] 
true 00:14:10.244 
[... identical *ERROR* line from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated from 2024-12-07 05:29:13.187283 through 2024-12-07 05:29:13.200966 (00:14:10.244-00:14:10.249) ...] 
[2024-12-07 05:29:13.200996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.201835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202784] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.202978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 
[2024-12-07 05:29:13.203484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.249 [2024-12-07 05:29:13.203657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.203999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.204997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205401] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.205990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 
[2024-12-07 05:29:13.206189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:10.250 [2024-12-07 05:29:13.206485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.206976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 
05:29:13.207043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.250 [2024-12-07 05:29:13.207355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:10.251 [2024-12-07 05:29:13.207810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.207992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.208987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209683] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.209982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 
[2024-12-07 05:29:13.210556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.251 [2024-12-07 05:29:13.210673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.210975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.211991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212130] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.212984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 
[2024-12-07 05:29:13.213157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 05:29:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:10.252 [2024-12-07 05:29:13.213393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 05:29:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.252 [2024-12-07 05:29:13.213748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
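For context on the error flood above: the message comes from SPDK's NVMe-oF bdev read path (ctrlr_bdev.c, nvmf_bdev_ctrlr_read_cmd) and is emitted when a READ would transfer more bytes (NLB * block size) than the SGL attached to the request describes; here the errors coincide with the hot-plug stress test removing namespace 1 from nqn.2016-06.io.spdk:cnode1 while I/O is still in flight (the rpc.py call above). The sketch below only illustrates that kind of bounds check and is not the SPDK source; all type and field names are hypothetical.

/* Illustrative bounds check only -- not the code at ctrlr_bdev.c:298.
 * A read is refused when the bytes it requests exceed the buffer that
 * the request's SGL describes.  Names below are hypothetical. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct read_req {
    uint64_t nlb;        /* logical blocks requested by the READ command */
    uint32_t block_size; /* namespace block size in bytes */
    uint64_t sgl_length; /* total byte length described by the SGL */
};

static bool read_fits_sgl(const struct read_req *r)
{
    uint64_t needed = r->nlb * (uint64_t)r->block_size;

    if (needed > r->sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu64 "\n",
                r->nlb, r->block_size, r->sgl_length);
        return false; /* caller would complete the command with an error status */
    }
    return true;
}

int main(void)
{
    /* The values seen in the log: 1 block of 512 bytes against a 1-byte SGL. */
    struct read_req r = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
    return read_fits_sgl(&r) ? 0 : 1;
}

Compiled and run with the log's values (1, 512, 1), the sketch prints the same "Read NLB 1 * block size 512 > SGL length 1" line and exits non-zero.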
1 00:14:10.252 [2024-12-07 05:29:13.213921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.213993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.214033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.214064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.252 [2024-12-07 05:29:13.214091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.214988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215729] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.215992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 
[2024-12-07 05:29:13.216473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.216828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.253 [2024-12-07 05:29:13.217901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.217935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.217964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218174] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.218992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 
[2024-12-07 05:29:13.219274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.219996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.254 [2024-12-07 05:29:13.220562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220842] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.220987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 
[2024-12-07 05:29:13.221834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.221970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.222996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.223026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.223050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.255 [2024-12-07 05:29:13.223073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223327] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.223969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 
[2024-12-07 05:29:13.224278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.224986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225947] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.225975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.256 [2024-12-07 05:29:13.226574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 [2024-12-07 05:29:13.226787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257 
[2024-12-07 05:29:13.226821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.257
[... same *ERROR* line from ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd repeated continuously, timestamps 05:29:13.226853 through 05:29:13.239113, elapsed stamps 00:14:10.257-00:14:10.260 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:10.260
[... same *ERROR* line repeated continuously, timestamps 05:29:13.239143 through 05:29:13.239803, elapsed stamps 00:14:10.260-00:14:10.261 ...]
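Editor's note: the burst above appears to be the negative path of nvmf bdev read handling being exercised repeatedly. Every rejected command trips the same check in ctrlr_bdev.c:298 (nvmf_bdev_ctrlr_read_cmd): a READ is refused when the transfer it implies (NLB times the namespace block size) exceeds the length described by the host's SGL. Below is a minimal, self-contained C sketch of that rule; the struct and function names (nvmf_read_cmd, read_len_ok) are illustrative assumptions, not the SPDK source.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct nvmf_read_cmd {            /* hypothetical shape, for illustration only */
    uint64_t nlb;                 /* number of logical blocks requested */
    uint32_t block_size;          /* namespace block size in bytes, e.g. 512 */
    uint32_t sgl_length;          /* bytes the host-supplied SGL can hold */
};

/* Return true if the read may proceed, false if it must be rejected
 * because the transfer would overrun the host buffer. */
static bool read_len_ok(const struct nvmf_read_cmd *cmd)
{
    uint64_t xfer_len = cmd->nlb * (uint64_t)cmd->block_size;

    if (xfer_len > cmd->sgl_length) {
        /* Same shape as the repeated log line above. */
        fprintf(stderr,
                "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                cmd->nlb, cmd->block_size, cmd->sgl_length);
        return false;
    }
    return true;
}

int main(void)
{
    /* The case this test keeps issuing: 1 block of 512 bytes,
     * but an SGL describing only 1 byte -> rejected. */
    struct nvmf_read_cmd bad = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
    /* A well-formed counterpart: the SGL covers the whole transfer. */
    struct nvmf_read_cmd ok  = { .nlb = 8, .block_size = 512, .sgl_length = 4096 };

    printf("bad command accepted: %s\n", read_len_ok(&bad) ? "yes" : "no");
    printf("ok  command accepted: %s\n", read_len_ok(&ok)  ? "yes" : "no");
    return 0;
}

Compiled with any C compiler, the sketch rejects the 1 * 512 > 1 case and accepts the well-formed one; the hundreds of identical lines in this log are that same rejection happening once per issued read, which is why the logger also prints the suppression summary above. End of note; the log continues below.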
[... same *ERROR* line from ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd repeated continuously, timestamps 05:29:13.239833 through 05:29:13.245285, elapsed stamps 00:14:10.261-00:14:10.262 ...]
[2024-12-07 05:29:13.245317] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.245970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 
[2024-12-07 05:29:13.246126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.262 [2024-12-07 05:29:13.246222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.246977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247880] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.247996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 
[2024-12-07 05:29:13.248902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.248971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.263 [2024-12-07 05:29:13.249734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.249975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250559] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.250991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 
[2024-12-07 05:29:13.251424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.251998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.264 [2024-12-07 05:29:13.252821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.252986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253258] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.253989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 
[2024-12-07 05:29:13.254284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.254989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255871] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.255936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.256106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.256140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.256172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.265 [2024-12-07 05:29:13.256197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 
[2024-12-07 05:29:13.256843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.256983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.257989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258661] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.258991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 
[2024-12-07 05:29:13.259400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.266 [2024-12-07 05:29:13.259431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.259976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.260987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261228] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.261970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 
[2024-12-07 05:29:13.262027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.267 [2024-12-07 05:29:13.262815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.262861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.262891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.262924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.262951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.262979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263702] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.263993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 
[2024-12-07 05:29:13.264698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.264990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.268 [2024-12-07 05:29:13.265957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.265980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266375] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.266995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 
[2024-12-07 05:29:13.267237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.267987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.268972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269162] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.269 [2024-12-07 05:29:13.269215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.269927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 
[2024-12-07 05:29:13.269959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.270986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.270 [2024-12-07 05:29:13.271706] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:10.270 [... the entry "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" repeats from 2024-12-07 05:29:13.271738 through 05:29:13.274568, differing only in timestamp; duplicate entries omitted ...]
00:14:10.271 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
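The repeated error above reports that each read asks for NLB 1 block of 512 bytes while the SGL supplied with the command describes only 1 byte, so the target rejects the transfer and the host sees completions with sct=0, sc=15, which matches the NVMe generic status "Data SGL Length Invalid" (0x0f). The standalone C sketch below illustrates that kind of length check; the names, constants, and structure are illustrative assumptions, not SPDK's actual nvmf_bdev_ctrlr_read_cmd implementation.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the length check behind the repeated error above.
 * Names and helpers are assumptions for illustration, not SPDK's API;
 * 0x0f is the NVMe generic status "Data SGL Length Invalid" (sc=15). */
#define SC_SUCCESS                 0x00
#define SC_DATA_SGL_LENGTH_INVALID 0x0f

static int check_read_length(uint64_t num_blocks, uint32_t block_size,
                             uint64_t sgl_length)
{
    uint64_t xfer_len = num_blocks * block_size;

    if (xfer_len > sgl_length) {
        /* Same condition the log reports for every rejected read. */
        fprintf(stderr, "Read NLB %llu * block size %u > SGL length %llu\n",
                (unsigned long long)num_blocks, block_size,
                (unsigned long long)sgl_length);
        return SC_DATA_SGL_LENGTH_INVALID;
    }
    return SC_SUCCESS;
}

int main(void)
{
    /* The failing case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
    int sc = check_read_length(1, 512, 1);
    printf("completion: sct=0, sc=%d\n", sc);   /* prints sc=15 */
    return 0;
}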
00:14:10.271 [... the same "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry continues to repeat from 05:29:13.274598 through 05:29:13.289599; duplicate entries omitted ...]
00:14:10.276 [2024-12-07 05:29:13.289622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.289984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290412] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.290837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 
[2024-12-07 05:29:13.291306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.276 [2024-12-07 05:29:13.291471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.291879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.292971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293047] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 
[2024-12-07 05:29:13.293951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.293981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.277 [2024-12-07 05:29:13.294825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.294989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295705] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.295978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 
[2024-12-07 05:29:13.296432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.296990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.278 [2024-12-07 05:29:13.297614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.297646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.297675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.297707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.297753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.297966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298277] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.298992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 
[2024-12-07 05:29:13.299106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.299986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.279 [2024-12-07 05:29:13.300837] ctrlr_bdev.c: 
00:14:10.279 [... ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same error repeated for several hundred consecutive reads, 2024-12-07 05:29:13.300869 through 05:29:13.307815) ...]
00:14:10.282 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:10.282 [... ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (repeated again for the remaining reads in this pass, 05:29:13.308066 through 05:29:13.318655) ...]
[2024-12-07 05:29:13.318687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.318981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.319996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.320034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.320067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.320115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.285 [2024-12-07 05:29:13.320147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320370] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.320993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 
[2024-12-07 05:29:13.321305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.321990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.322983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323008] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.286 [2024-12-07 05:29:13.323760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.323969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 
[2024-12-07 05:29:13.324206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.324975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325914] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.325975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.287 [2024-12-07 05:29:13.326662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 
[2024-12-07 05:29:13.326718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.326971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.327993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328473] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.328991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 
[2024-12-07 05:29:13.329251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.329998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.330035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.330068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.288 [2024-12-07 05:29:13.330105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330914] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.330984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:10.289 [2024-12-07 05:29:13.331707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:11.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.232 05:29:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.493 05:29:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:11.493 05:29:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:11.493 true 00:14:11.493 05:29:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:11.493 05:29:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:12.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.436 05:29:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.695 05:29:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:12.695 05:29:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:12.695 true 00:14:12.695 05:29:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:12.695 05:29:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.954 05:29:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.214 05:29:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:13.214 05:29:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:13.214 true 00:14:13.214 05:29:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:13.214 05:29:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.474 05:29:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.734 05:29:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:13.734 05:29:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:13.734 true 00:14:13.734 05:29:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:13.734 05:29:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.995 05:29:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.254 05:29:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:14.254 05:29:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:14.254 true 00:14:14.254 05:29:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:14.254 05:29:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.515 05:29:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.775 05:29:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:14.775 05:29:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:14.775 true 00:14:14.775 05:29:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:14.775 05:29:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
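The trace above is the main hot-plug loop of ns_hotplug_stress.sh: each pass checks that the background I/O generator (pid 1737679 in this run) is still alive, detaches namespace 1, re-attaches it backed by the Delay0 bdev, and grows the NULL1 bdev by one step (null_size 1036, 1037, ... in this excerpt). A minimal sketch of that cycle, reconstructed only from the rpc.py calls visible in the xtrace; the PERF_PID variable and the while-loop framing are assumptions, not the script's literal code:

    # Sketch of the single-namespace hot-plug cycle seen above (reconstruction, not the real script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$PERF_PID"; do                                      # PERF_PID is assumed; 1737679 here
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                   # carried over from earlier cycles; 1036+ at this point
        $rpc bdev_null_resize NULL1 "$null_size"                       # grow the null bdev behind the other namespace
    done

The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" entries are presumably host reads caught mid hot-plug completing with an error status, which is the condition this stress test is meant to provoke.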
00:14:15.035 05:29:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.035 05:29:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:15.035 05:29:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:15.295 true 00:14:15.295 05:29:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:15.295 05:29:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.554 05:29:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.814 05:29:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:15.814 05:29:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:15.814 true 00:14:15.814 05:29:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:15.814 05:29:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.073 05:29:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.073 05:29:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:16.073 05:29:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:16.333 true 00:14:16.333 05:29:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:16.333 05:29:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.593 05:29:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.593 05:29:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:16.593 05:29:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:16.854 true 00:14:16.854 05:29:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:16.854 05:29:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.114 05:29:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.114 05:29:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:17.114 05:29:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:17.375 true 00:14:17.375 05:29:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:17.375 05:29:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.636 05:29:20 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.636 05:29:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:17.636 05:29:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:17.897 true 00:14:17.897 05:29:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:17.897 05:29:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.156 05:29:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.156 05:29:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:18.156 05:29:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:18.415 true 00:14:18.415 05:29:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:18.415 05:29:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.415 05:29:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.686 05:29:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:18.686 05:29:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:18.945 true 00:14:18.945 05:29:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:18.945 05:29:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.945 05:29:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.205 05:29:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:19.205 05:29:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:19.465 true 00:14:19.465 05:29:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:19.465 05:29:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.465 05:29:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.724 05:29:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:19.725 05:29:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:19.984 true 00:14:19.984 05:29:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:19.984 05:29:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.984 05:29:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:20.243 05:29:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:20.243 05:29:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:20.502 true 00:14:20.502 05:29:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:20.502 05:29:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.502 05:29:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.760 05:29:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:20.760 05:29:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:21.019 true 00:14:21.019 05:29:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:21.019 05:29:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.019 05:29:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.279 05:29:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:21.279 05:29:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:21.539 true 00:14:21.539 05:29:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:21.539 05:29:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.539 05:29:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.798 05:29:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:21.798 05:29:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:21.798 Initializing NVMe Controllers 00:14:21.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.798 Controller IO queue size 128, less than required. 00:14:21.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.798 Controller IO queue size 128, less than required. 00:14:21.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:21.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:21.798 Initialization complete. Launching workers. 
00:14:21.798 ========================================================
00:14:21.798 Latency(us)
00:14:21.798 Device Information : IOPS MiB/s Average min max
00:14:21.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1331.92 0.65 14292.23 1717.93 1106636.57
00:14:21.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4653.81 2.27 27415.96 1461.58 407354.84
00:14:21.798 ========================================================
00:14:21.798 Total : 5985.73 2.92 24495.73 1461.58 1106636.57
00:14:21.798
00:14:21.798 true 00:14:22.058 05:29:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1737679 00:14:22.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1737679) - No such process 00:14:22.058 05:29:25 -- target/ns_hotplug_stress.sh@53 -- # wait 1737679 00:14:22.058 05:29:25 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.058 05:29:25 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.318 05:29:25 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:22.318 05:29:25 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:22.318 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:22.318 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.318 05:29:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:22.318 null0 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:22.577 null1 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.577 05:29:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:22.836 null2 00:14:22.836 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.836 05:29:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.836 05:29:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:22.836 null3 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:23.096 null4 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.096 05:29:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:23.356 null5 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:23.356 null6 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.356 05:29:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:23.618 null7 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@66 -- # wait 1744142 1744144 1744147 1744150 1744153 1744156 1744158 1744162 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.618 05:29:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.879 05:29:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.879 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.880 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.141 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.141 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.141 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.141 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.141 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
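From 00:14:22 onward the script is in its parallel phase: eight null bdevs (null0 through null7, each created with "bdev_null_create nullN 100 4096") are paired with namespace IDs 1-8, and eight background add_remove workers churn those namespaces on and off nqn.2016-06.io.spdk:cnode1 while the parent waits on their pids (the "wait 1744142 1744144 ..." entry above). A compact sketch of that phase, put together from the add_remove xtrace (@14-@18) and the launch loop (@58-@66) shown above; anything not visible in the trace is an assumption:

    # Reconstruction of the parallel add/remove phase (sketch, not the script verbatim).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                                   # churn one namespace ID against one null bdev
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do               # 10 iterations, per the '(( i < 10 ))' trace
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096      # 100 MB null bdev, 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # nsid 1..8, each with its own backing bdev
        pids+=($!)                                   # collected for the 'wait' seen in the log
    done
    wait "${pids[@]}"

Because the eight workers hit the same subsystem concurrently, their add/remove RPCs interleave arbitrarily, which is why the nsid order in the trace that follows looks shuffled from one round to the next.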
00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.142 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.403 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.664 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.925 05:29:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.925 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.187 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.450 05:29:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.450 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.712 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.713 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.713 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.713 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.713 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.990 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.281 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.545 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.807 05:29:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.068 05:29:30 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.068 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:27.331 05:29:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:27.331 05:29:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:27.331 05:29:30 -- nvmf/common.sh@116 -- # sync 00:14:27.331 05:29:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:27.331 05:29:30 -- nvmf/common.sh@119 -- # set +e 00:14:27.331 05:29:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:27.331 05:29:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:27.331 rmmod nvme_tcp 00:14:27.331 rmmod nvme_fabrics 00:14:27.331 rmmod nvme_keyring 00:14:27.331 05:29:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:27.331 05:29:30 -- nvmf/common.sh@123 -- # set -e 00:14:27.331 05:29:30 -- nvmf/common.sh@124 -- # return 0 00:14:27.331 05:29:30 -- nvmf/common.sh@477 -- # '[' -n 1737011 ']' 00:14:27.331 05:29:30 -- nvmf/common.sh@478 -- # killprocess 1737011 00:14:27.331 05:29:30 -- common/autotest_common.sh@936 -- # '[' -z 1737011 ']' 00:14:27.331 05:29:30 -- common/autotest_common.sh@940 -- # kill -0 1737011 00:14:27.331 05:29:30 -- common/autotest_common.sh@941 -- # uname 00:14:27.331 05:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.331 05:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1737011 00:14:27.331 05:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:27.331 05:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:27.331 05:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1737011' 00:14:27.331 killing process with pid 1737011 00:14:27.331 05:29:30 -- common/autotest_common.sh@955 -- # kill 1737011 00:14:27.331 05:29:30 -- common/autotest_common.sh@960 -- # wait 1737011 00:14:27.593 05:29:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.593 05:29:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.593 05:29:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.593 05:29:30 -- nvmf/common.sh@273 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.593 05:29:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.593 05:29:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.593 05:29:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.593 05:29:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.508 05:29:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:29.508 00:14:29.508 real 0m48.224s 00:14:29.508 user 3m14.828s 00:14:29.508 sys 0m15.626s 00:14:29.508 05:29:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.508 05:29:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.508 ************************************ 00:14:29.508 END TEST nvmf_ns_hotplug_stress 00:14:29.508 ************************************ 00:14:29.770 05:29:32 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.770 05:29:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.770 05:29:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.770 ************************************ 00:14:29.770 START TEST nvmf_connect_stress 00:14:29.770 ************************************ 00:14:29.770 05:29:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.770 * Looking for test storage... 00:14:29.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.770 05:29:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.770 05:29:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.770 05:29:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.770 05:29:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.770 05:29:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.770 05:29:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.770 05:29:32 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.770 05:29:32 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.770 05:29:32 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.770 05:29:32 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.770 05:29:32 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.770 05:29:32 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.770 05:29:32 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.770 05:29:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.770 05:29:32 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.770 05:29:32 -- scripts/common.sh@344 -- # : 1 00:14:29.770 05:29:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.770 05:29:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.770 05:29:32 -- scripts/common.sh@364 -- # decimal 1 00:14:29.770 05:29:32 -- scripts/common.sh@352 -- # local d=1 00:14:29.770 05:29:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.770 05:29:32 -- scripts/common.sh@354 -- # echo 1 00:14:29.770 05:29:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.770 05:29:32 -- scripts/common.sh@365 -- # decimal 2 00:14:29.770 05:29:32 -- scripts/common.sh@352 -- # local d=2 00:14:29.770 05:29:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.770 05:29:32 -- scripts/common.sh@354 -- # echo 2 00:14:29.770 05:29:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.770 05:29:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.770 05:29:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.770 05:29:32 -- scripts/common.sh@367 -- # return 0 00:14:29.770 05:29:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.770 --rc genhtml_branch_coverage=1 00:14:29.770 --rc genhtml_function_coverage=1 00:14:29.770 --rc genhtml_legend=1 00:14:29.770 --rc geninfo_all_blocks=1 00:14:29.770 --rc geninfo_unexecuted_blocks=1 00:14:29.770 00:14:29.770 ' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.770 --rc genhtml_branch_coverage=1 00:14:29.770 --rc genhtml_function_coverage=1 00:14:29.770 --rc genhtml_legend=1 00:14:29.770 --rc geninfo_all_blocks=1 00:14:29.770 --rc geninfo_unexecuted_blocks=1 00:14:29.770 00:14:29.770 ' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.770 --rc genhtml_branch_coverage=1 00:14:29.770 --rc genhtml_function_coverage=1 00:14:29.770 --rc genhtml_legend=1 00:14:29.770 --rc geninfo_all_blocks=1 00:14:29.770 --rc geninfo_unexecuted_blocks=1 00:14:29.770 00:14:29.770 ' 00:14:29.770 05:29:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.770 --rc genhtml_branch_coverage=1 00:14:29.770 --rc genhtml_function_coverage=1 00:14:29.770 --rc genhtml_legend=1 00:14:29.770 --rc geninfo_all_blocks=1 00:14:29.770 --rc geninfo_unexecuted_blocks=1 00:14:29.770 00:14:29.770 ' 00:14:29.770 05:29:32 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.770 05:29:32 -- nvmf/common.sh@7 -- # uname -s 00:14:29.770 05:29:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.770 05:29:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.770 05:29:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.770 05:29:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.770 05:29:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.770 05:29:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.770 05:29:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.770 05:29:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.770 05:29:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.770 05:29:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.770 05:29:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:29.770 05:29:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:29.770 05:29:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.770 05:29:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.770 05:29:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.770 05:29:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.770 05:29:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.770 05:29:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.770 05:29:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.770 05:29:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.770 05:29:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.770 05:29:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.770 05:29:32 -- paths/export.sh@5 -- # export PATH 00:14:29.770 05:29:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.770 05:29:32 -- nvmf/common.sh@46 -- # : 0 00:14:29.770 05:29:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:29.770 05:29:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:29.770 05:29:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:29.770 05:29:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.770 05:29:32 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.770 05:29:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:29.770 05:29:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:29.770 05:29:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:29.770 05:29:32 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:29.770 05:29:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:29.770 05:29:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.770 05:29:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:29.770 05:29:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:29.770 05:29:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:29.770 05:29:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.770 05:29:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.770 05:29:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.770 05:29:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:29.771 05:29:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:29.771 05:29:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:29.771 05:29:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.917 05:29:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:37.917 05:29:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:37.917 05:29:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:37.917 05:29:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:37.917 05:29:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:37.917 05:29:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:37.917 05:29:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:37.917 05:29:40 -- nvmf/common.sh@294 -- # net_devs=() 00:14:37.917 05:29:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:37.917 05:29:40 -- nvmf/common.sh@295 -- # e810=() 00:14:37.917 05:29:40 -- nvmf/common.sh@295 -- # local -ga e810 00:14:37.917 05:29:40 -- nvmf/common.sh@296 -- # x722=() 00:14:37.917 05:29:40 -- nvmf/common.sh@296 -- # local -ga x722 00:14:37.917 05:29:40 -- nvmf/common.sh@297 -- # mlx=() 00:14:37.917 05:29:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:37.917 05:29:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.917 05:29:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:37.917 05:29:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:37.917 
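The trace above shows gather_supported_nvmf_pci_devs building its candidate lists from raw PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox IDs) and then, in the entries that follow, resolving each matching device to a kernel interface name through /sys/bus/pci/devices/<pci>/net/ ("Found net devices under 0000:31:00.x: cvl_0_x"). A minimal stand-alone sketch of that discovery step is below; the device IDs and the sysfs path come from the trace, but the use of lspci is an assumption of the sketch (the harness itself walks a pre-built pci_bus_cache rather than calling lspci).

  #!/usr/bin/env bash
  # Sketch: list Intel E810 NICs (device IDs 0x1592 / 0x159b, as matched in the
  # trace) and print the net interface each one exposes under sysfs.
  intel=0x8086
  for dev_id in 0x1592 0x159b; do
      # lspci -Dnd <vendor>:<device> prints the PCI address of every match.
      for pci in $(lspci -Dnd "${intel#0x}:${dev_id#0x}" | awk '{print $1}'); do
          # A NIC bound to a kernel driver exposes its netdev name here.
          for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
              [ -e "$netdir" ] && echo "Found $pci -> ${netdir##*/}"
          done
      done
  done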
05:29:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:37.917 05:29:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.917 05:29:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:37.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:37.917 05:29:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.917 05:29:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:37.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:37.917 05:29:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:37.917 05:29:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:37.917 05:29:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.918 05:29:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.918 05:29:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.918 05:29:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.918 05:29:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:37.918 Found net devices under 0000:31:00.0: cvl_0_0 00:14:37.918 05:29:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.918 05:29:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.918 05:29:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.918 05:29:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.918 05:29:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.918 05:29:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:37.918 Found net devices under 0000:31:00.1: cvl_0_1 00:14:37.918 05:29:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.918 05:29:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:37.918 05:29:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:37.918 05:29:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:37.918 05:29:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:37.918 05:29:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:37.918 05:29:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.918 05:29:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.918 05:29:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.918 05:29:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:37.918 05:29:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.918 05:29:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.918 05:29:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:37.918 05:29:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.918 05:29:40 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.918 05:29:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:37.918 05:29:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:37.918 05:29:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.918 05:29:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.918 05:29:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.918 05:29:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.918 05:29:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:37.918 05:29:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.918 05:29:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.918 05:29:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.918 05:29:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:37.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:14:37.918 00:14:37.918 --- 10.0.0.2 ping statistics --- 00:14:37.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.918 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:14:37.918 05:29:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:14:37.918 00:14:37.918 --- 10.0.0.1 ping statistics --- 00:14:37.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.918 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:37.918 05:29:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.918 05:29:40 -- nvmf/common.sh@410 -- # return 0 00:14:37.918 05:29:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:37.918 05:29:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.918 05:29:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:37.918 05:29:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:37.918 05:29:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.918 05:29:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:37.918 05:29:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:37.918 05:29:40 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:37.918 05:29:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:37.918 05:29:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.918 05:29:40 -- common/autotest_common.sh@10 -- # set +x 00:14:37.918 05:29:40 -- nvmf/common.sh@469 -- # nvmfpid=1749233 00:14:37.918 05:29:40 -- nvmf/common.sh@470 -- # waitforlisten 1749233 00:14:37.918 05:29:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:37.918 05:29:40 -- common/autotest_common.sh@829 -- # '[' -z 1749233 ']' 00:14:37.918 05:29:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.918 05:29:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.918 05:29:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
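At this point nvmftestinit has finished wiring the two E810 ports into a back-to-back topology: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-checked, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace (pid 1749233). The sketch below condenses that sequence plus the RPC bring-up and connect_stress launch that the next trace entries record; it is not the literal common.sh helper code, paths are shortened to an SPDK checkout root, and it needs root to run.

  # Target side lives in its own netns; initiator side stays in the root netns.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check
  modprobe nvme-tcp
  # Start the SPDK target inside the namespace with the flags from the trace.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # RPC bring-up issued next by connect_stress.sh over /var/tmp/spdk.sock:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  # ...and the stress client itself (core mask 0x1, 10-second run per -t 10):
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

The subsystem serial shown in the trace is SPDK00000000000001 for this test; the SPDKISFASTANDAWESOME default above is the common.sh value and either works as a placeholder. While connect_stress runs, the script simply polls kill -0 on its pid between rpc_cmd calls, which is the repeated "kill -0 1749558" pattern in the following entries.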
00:14:37.918 05:29:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.918 05:29:40 -- common/autotest_common.sh@10 -- # set +x 00:14:37.918 [2024-12-07 05:29:40.456609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:37.918 [2024-12-07 05:29:40.456675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.918 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.918 [2024-12-07 05:29:40.549506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.918 [2024-12-07 05:29:40.641357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.918 [2024-12-07 05:29:40.641536] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.918 [2024-12-07 05:29:40.641547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.918 [2024-12-07 05:29:40.641555] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.918 [2024-12-07 05:29:40.641711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.918 [2024-12-07 05:29:40.641877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.918 [2024-12-07 05:29:40.641878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.180 05:29:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.180 05:29:41 -- common/autotest_common.sh@862 -- # return 0 00:14:38.180 05:29:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:38.180 05:29:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.180 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 05:29:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.180 05:29:41 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.180 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.180 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 [2024-12-07 05:29:41.303481] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.180 05:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.180 05:29:41 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.180 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.180 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 05:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.180 05:29:41 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.180 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.180 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 [2024-12-07 05:29:41.327933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.180 05:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.180 05:29:41 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.180 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.180 05:29:41 -- common/autotest_common.sh@10 -- 
# set +x 00:14:38.180 NULL1 00:14:38.180 05:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.180 05:29:41 -- target/connect_stress.sh@21 -- # PERF_PID=1749558 00:14:38.180 05:29:41 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.180 05:29:41 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.180 05:29:41 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.180 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.180 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.441 05:29:41 -- target/connect_stress.sh@28 -- # cat 00:14:38.441 05:29:41 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:38.441 05:29:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.441 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.441 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.702 05:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.702 05:29:41 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:38.702 05:29:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.702 05:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.702 05:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:38.962 05:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.962 05:29:42 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:38.962 05:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.962 05:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.962 05:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.222 05:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.222 05:29:42 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:39.222 05:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.222 05:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.222 05:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.793 05:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.793 05:29:42 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:39.793 05:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.793 05:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.793 05:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:40.053 05:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.053 05:29:43 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:40.053 05:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.053 05:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.053 05:29:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.312 05:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.312 05:29:43 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:40.312 05:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.312 05:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.312 05:29:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.571 05:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.571 05:29:43 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:40.571 05:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.571 05:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.571 05:29:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.831 05:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.831 05:29:44 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:40.831 05:29:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.831 05:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.831 05:29:44 -- common/autotest_common.sh@10 -- # set +x 00:14:41.421 05:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.421 05:29:44 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:41.421 05:29:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.421 05:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.421 05:29:44 -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.680 05:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 05:29:44 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:41.680 05:29:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.680 05:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 05:29:44 -- common/autotest_common.sh@10 -- # set +x 00:14:41.940 05:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.940 05:29:45 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:41.940 05:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.940 05:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.940 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:14:42.199 05:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.199 05:29:45 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:42.199 05:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.199 05:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.200 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:14:42.459 05:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.459 05:29:45 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:42.459 05:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.459 05:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.459 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:14:43.029 05:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.029 05:29:46 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:43.029 05:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.029 05:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.029 05:29:46 -- common/autotest_common.sh@10 -- # set +x 00:14:43.288 05:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.288 05:29:46 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:43.288 05:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.288 05:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.288 05:29:46 -- common/autotest_common.sh@10 -- # set +x 00:14:43.549 05:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.549 05:29:46 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:43.549 05:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.549 05:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.549 05:29:46 -- common/autotest_common.sh@10 -- # set +x 00:14:43.809 05:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.809 05:29:46 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:43.809 05:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.809 05:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.809 05:29:46 -- common/autotest_common.sh@10 -- # set +x 00:14:44.069 05:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.069 05:29:47 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:44.069 05:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.069 05:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.069 05:29:47 -- common/autotest_common.sh@10 -- # set +x 00:14:44.640 05:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.640 05:29:47 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:44.640 05:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.640 05:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.640 05:29:47 -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.900 05:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.900 05:29:47 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:44.900 05:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.900 05:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.900 05:29:47 -- common/autotest_common.sh@10 -- # set +x 00:14:45.160 05:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.160 05:29:48 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:45.160 05:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.160 05:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.160 05:29:48 -- common/autotest_common.sh@10 -- # set +x 00:14:45.441 05:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.441 05:29:48 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:45.441 05:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.441 05:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.441 05:29:48 -- common/autotest_common.sh@10 -- # set +x 00:14:45.703 05:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.703 05:29:48 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:45.703 05:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.703 05:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.703 05:29:48 -- common/autotest_common.sh@10 -- # set +x 00:14:46.274 05:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.274 05:29:49 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:46.274 05:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.274 05:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.274 05:29:49 -- common/autotest_common.sh@10 -- # set +x 00:14:46.536 05:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.536 05:29:49 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:46.536 05:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.536 05:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.536 05:29:49 -- common/autotest_common.sh@10 -- # set +x 00:14:46.797 05:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.797 05:29:49 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:46.797 05:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.797 05:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.797 05:29:49 -- common/autotest_common.sh@10 -- # set +x 00:14:47.059 05:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.059 05:29:50 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:47.059 05:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.059 05:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.059 05:29:50 -- common/autotest_common.sh@10 -- # set +x 00:14:47.630 05:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.630 05:29:50 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:47.630 05:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.630 05:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.630 05:29:50 -- common/autotest_common.sh@10 -- # set +x 00:14:47.891 05:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.891 05:29:50 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:47.891 05:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.891 05:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.891 05:29:50 -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.152 05:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.152 05:29:51 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:48.152 05:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.152 05:29:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.152 05:29:51 -- common/autotest_common.sh@10 -- # set +x 00:14:48.413 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.414 05:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.414 05:29:51 -- target/connect_stress.sh@34 -- # kill -0 1749558 00:14:48.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1749558) - No such process 00:14:48.414 05:29:51 -- target/connect_stress.sh@38 -- # wait 1749558 00:14:48.414 05:29:51 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:48.414 05:29:51 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:48.414 05:29:51 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:48.414 05:29:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:48.414 05:29:51 -- nvmf/common.sh@116 -- # sync 00:14:48.414 05:29:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:48.414 05:29:51 -- nvmf/common.sh@119 -- # set +e 00:14:48.414 05:29:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:48.414 05:29:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:48.414 rmmod nvme_tcp 00:14:48.414 rmmod nvme_fabrics 00:14:48.414 rmmod nvme_keyring 00:14:48.414 05:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:48.414 05:29:51 -- nvmf/common.sh@123 -- # set -e 00:14:48.414 05:29:51 -- nvmf/common.sh@124 -- # return 0 00:14:48.414 05:29:51 -- nvmf/common.sh@477 -- # '[' -n 1749233 ']' 00:14:48.414 05:29:51 -- nvmf/common.sh@478 -- # killprocess 1749233 00:14:48.414 05:29:51 -- common/autotest_common.sh@936 -- # '[' -z 1749233 ']' 00:14:48.414 05:29:51 -- common/autotest_common.sh@940 -- # kill -0 1749233 00:14:48.414 05:29:51 -- common/autotest_common.sh@941 -- # uname 00:14:48.414 05:29:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.414 05:29:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1749233 00:14:48.675 05:29:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:48.675 05:29:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:48.675 05:29:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1749233' 00:14:48.675 killing process with pid 1749233 00:14:48.675 05:29:51 -- common/autotest_common.sh@955 -- # kill 1749233 00:14:48.675 05:29:51 -- common/autotest_common.sh@960 -- # wait 1749233 00:14:48.675 05:29:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:48.675 05:29:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:48.675 05:29:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:48.675 05:29:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.675 05:29:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:48.675 05:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.675 05:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.675 05:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.225 05:29:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:51.225 00:14:51.225 real 0m21.094s 00:14:51.225 user 
0m42.053s 00:14:51.225 sys 0m9.064s 00:14:51.225 05:29:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.225 05:29:53 -- common/autotest_common.sh@10 -- # set +x 00:14:51.225 ************************************ 00:14:51.225 END TEST nvmf_connect_stress 00:14:51.225 ************************************ 00:14:51.225 05:29:53 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.225 05:29:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.225 05:29:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.225 05:29:53 -- common/autotest_common.sh@10 -- # set +x 00:14:51.225 ************************************ 00:14:51.225 START TEST nvmf_fused_ordering 00:14:51.225 ************************************ 00:14:51.225 05:29:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.225 * Looking for test storage... 00:14:51.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.225 05:29:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.225 05:29:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.225 05:29:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.225 05:29:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.225 05:29:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.225 05:29:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.225 05:29:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.225 05:29:54 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.225 05:29:54 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.225 05:29:54 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.225 05:29:54 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.225 05:29:54 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.225 05:29:54 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.225 05:29:54 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.225 05:29:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.225 05:29:54 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.225 05:29:54 -- scripts/common.sh@344 -- # : 1 00:14:51.225 05:29:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.225 05:29:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.225 05:29:54 -- scripts/common.sh@364 -- # decimal 1 00:14:51.225 05:29:54 -- scripts/common.sh@352 -- # local d=1 00:14:51.225 05:29:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.225 05:29:54 -- scripts/common.sh@354 -- # echo 1 00:14:51.225 05:29:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.225 05:29:54 -- scripts/common.sh@365 -- # decimal 2 00:14:51.225 05:29:54 -- scripts/common.sh@352 -- # local d=2 00:14:51.226 05:29:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.226 05:29:54 -- scripts/common.sh@354 -- # echo 2 00:14:51.226 05:29:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.226 05:29:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.226 05:29:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.226 05:29:54 -- scripts/common.sh@367 -- # return 0 00:14:51.226 05:29:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.226 05:29:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.226 --rc genhtml_branch_coverage=1 00:14:51.226 --rc genhtml_function_coverage=1 00:14:51.226 --rc genhtml_legend=1 00:14:51.226 --rc geninfo_all_blocks=1 00:14:51.226 --rc geninfo_unexecuted_blocks=1 00:14:51.226 00:14:51.226 ' 00:14:51.226 05:29:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.226 --rc genhtml_branch_coverage=1 00:14:51.226 --rc genhtml_function_coverage=1 00:14:51.226 --rc genhtml_legend=1 00:14:51.226 --rc geninfo_all_blocks=1 00:14:51.226 --rc geninfo_unexecuted_blocks=1 00:14:51.226 00:14:51.226 ' 00:14:51.226 05:29:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.226 --rc genhtml_branch_coverage=1 00:14:51.226 --rc genhtml_function_coverage=1 00:14:51.226 --rc genhtml_legend=1 00:14:51.226 --rc geninfo_all_blocks=1 00:14:51.226 --rc geninfo_unexecuted_blocks=1 00:14:51.226 00:14:51.226 ' 00:14:51.226 05:29:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.226 --rc genhtml_branch_coverage=1 00:14:51.226 --rc genhtml_function_coverage=1 00:14:51.226 --rc genhtml_legend=1 00:14:51.226 --rc geninfo_all_blocks=1 00:14:51.226 --rc geninfo_unexecuted_blocks=1 00:14:51.226 00:14:51.226 ' 00:14:51.226 05:29:54 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.226 05:29:54 -- nvmf/common.sh@7 -- # uname -s 00:14:51.226 05:29:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.226 05:29:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.226 05:29:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.226 05:29:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.226 05:29:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.226 05:29:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.226 05:29:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.226 05:29:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.226 05:29:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.226 05:29:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.226 05:29:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.226 05:29:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.226 05:29:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.226 05:29:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.226 05:29:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.226 05:29:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.226 05:29:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.226 05:29:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.226 05:29:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.226 05:29:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.226 05:29:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.226 05:29:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.226 05:29:54 -- paths/export.sh@5 -- # export PATH 00:14:51.226 05:29:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.226 05:29:54 -- nvmf/common.sh@46 -- # : 0 00:14:51.226 05:29:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.226 05:29:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.226 05:29:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.226 05:29:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.226 05:29:54 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.226 05:29:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.226 05:29:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.226 05:29:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.226 05:29:54 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:51.226 05:29:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.226 05:29:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.226 05:29:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.226 05:29:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.226 05:29:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.226 05:29:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.226 05:29:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.226 05:29:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.226 05:29:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:51.226 05:29:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:51.226 05:29:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:51.226 05:29:54 -- common/autotest_common.sh@10 -- # set +x 00:14:59.371 05:30:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:59.371 05:30:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:59.371 05:30:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:59.371 05:30:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:59.371 05:30:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:59.371 05:30:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:59.371 05:30:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:59.371 05:30:01 -- nvmf/common.sh@294 -- # net_devs=() 00:14:59.371 05:30:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:59.371 05:30:01 -- nvmf/common.sh@295 -- # e810=() 00:14:59.371 05:30:01 -- nvmf/common.sh@295 -- # local -ga e810 00:14:59.371 05:30:01 -- nvmf/common.sh@296 -- # x722=() 00:14:59.371 05:30:01 -- nvmf/common.sh@296 -- # local -ga x722 00:14:59.371 05:30:01 -- nvmf/common.sh@297 -- # mlx=() 00:14:59.371 05:30:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:59.371 05:30:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.371 05:30:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:59.371 05:30:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:59.371 
05:30:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.371 05:30:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:59.371 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:59.371 05:30:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.371 05:30:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:59.371 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:59.371 05:30:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.371 05:30:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.371 05:30:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.371 05:30:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:59.371 Found net devices under 0000:31:00.0: cvl_0_0 00:14:59.371 05:30:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.371 05:30:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.371 05:30:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.371 05:30:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.371 05:30:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:59.371 Found net devices under 0000:31:00.1: cvl_0_1 00:14:59.371 05:30:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.371 05:30:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:59.371 05:30:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:59.371 05:30:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:59.371 05:30:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.371 05:30:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.371 05:30:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.371 05:30:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:59.371 05:30:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.371 05:30:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.371 05:30:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:59.371 05:30:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.371 05:30:01 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.371 05:30:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:59.371 05:30:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:59.371 05:30:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.371 05:30:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.371 05:30:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.371 05:30:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.371 05:30:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:59.371 05:30:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.372 05:30:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.372 05:30:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.372 05:30:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:59.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:14:59.372 00:14:59.372 --- 10.0.0.2 ping statistics --- 00:14:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.372 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:14:59.372 05:30:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:14:59.372 00:14:59.372 --- 10.0.0.1 ping statistics --- 00:14:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.372 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:14:59.372 05:30:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.372 05:30:01 -- nvmf/common.sh@410 -- # return 0 00:14:59.372 05:30:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:59.372 05:30:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.372 05:30:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:59.372 05:30:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:59.372 05:30:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.372 05:30:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:59.372 05:30:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:59.372 05:30:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:59.372 05:30:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:59.372 05:30:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.372 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 05:30:01 -- nvmf/common.sh@469 -- # nvmfpid=1755891 00:14:59.372 05:30:01 -- nvmf/common.sh@470 -- # waitforlisten 1755891 00:14:59.372 05:30:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.372 05:30:01 -- common/autotest_common.sh@829 -- # '[' -z 1755891 ']' 00:14:59.372 05:30:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.372 05:30:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.372 05:30:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
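The trace above builds the whole NVMe/TCP test topology on one host: port 0000:31:00.0 (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, while its link partner 0000:31:00.1 (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, so traffic really crosses the E810 link. A minimal sketch of that bring-up, assuming the same interface and namespace names and root privileges, is:

  # Sketch of nvmf_tcp_init as traced above; names are taken from this run.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                         # initiator -> target check
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator check
  modprobe nvme-tcp                                          # kernel initiator for later connects

The two ping probes correspond to the 0.661 ms and 0.284 ms replies logged above.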
00:14:59.372 05:30:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.372 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 [2024-12-07 05:30:01.587268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.372 [2024-12-07 05:30:01.587326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.372 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.372 [2024-12-07 05:30:01.671721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.372 [2024-12-07 05:30:01.769215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.372 [2024-12-07 05:30:01.769368] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.372 [2024-12-07 05:30:01.769378] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.372 [2024-12-07 05:30:01.769385] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.372 [2024-12-07 05:30:01.769412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.372 05:30:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.372 05:30:02 -- common/autotest_common.sh@862 -- # return 0 00:14:59.372 05:30:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.372 05:30:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 05:30:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.372 05:30:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.372 05:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 [2024-12-07 05:30:02.428994] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.372 05:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.372 05:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 [2024-12-07 05:30:02.453255] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.372 05:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 NULL1 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:59.372 05:30:02 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:59.372 05:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.372 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.372 05:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.372 05:30:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:59.372 [2024-12-07 05:30:02.520749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.372 [2024-12-07 05:30:02.520790] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756160 ] 00:14:59.372 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.945 Attached to nqn.2016-06.io.spdk:cnode1 00:14:59.945 Namespace ID: 1 size: 1GB 00:14:59.945 fused_ordering(0) 00:14:59.945 fused_ordering(1) 00:14:59.945 fused_ordering(2) 00:14:59.945 fused_ordering(3) 00:14:59.945 fused_ordering(4) 00:14:59.945 fused_ordering(5) 00:14:59.945 fused_ordering(6) 00:14:59.945 fused_ordering(7) 00:14:59.945 fused_ordering(8) 00:14:59.945 fused_ordering(9) 00:14:59.945 fused_ordering(10) 00:14:59.945 fused_ordering(11) 00:14:59.945 fused_ordering(12) 00:14:59.945 fused_ordering(13) 00:14:59.945 fused_ordering(14) 00:14:59.945 fused_ordering(15) 00:14:59.945 fused_ordering(16) 00:14:59.945 fused_ordering(17) 00:14:59.945 fused_ordering(18) 00:14:59.945 fused_ordering(19) 00:14:59.945 fused_ordering(20) 00:14:59.945 fused_ordering(21) 00:14:59.945 fused_ordering(22) 00:14:59.945 fused_ordering(23) 00:14:59.945 fused_ordering(24) 00:14:59.945 fused_ordering(25) 00:14:59.945 fused_ordering(26) 00:14:59.945 fused_ordering(27) 00:14:59.945 fused_ordering(28) 00:14:59.945 fused_ordering(29) 00:14:59.945 fused_ordering(30) 00:14:59.945 fused_ordering(31) 00:14:59.945 fused_ordering(32) 00:14:59.945 fused_ordering(33) 00:14:59.945 fused_ordering(34) 00:14:59.945 fused_ordering(35) 00:14:59.945 fused_ordering(36) 00:14:59.945 fused_ordering(37) 00:14:59.945 fused_ordering(38) 00:14:59.945 fused_ordering(39) 00:14:59.945 fused_ordering(40) 00:14:59.945 fused_ordering(41) 00:14:59.945 fused_ordering(42) 00:14:59.945 fused_ordering(43) 00:14:59.945 fused_ordering(44) 00:14:59.945 fused_ordering(45) 00:14:59.945 fused_ordering(46) 00:14:59.945 fused_ordering(47) 00:14:59.945 fused_ordering(48) 00:14:59.945 fused_ordering(49) 00:14:59.945 fused_ordering(50) 00:14:59.945 fused_ordering(51) 00:14:59.945 fused_ordering(52) 00:14:59.945 fused_ordering(53) 00:14:59.945 fused_ordering(54) 00:14:59.945 fused_ordering(55) 00:14:59.945 fused_ordering(56) 00:14:59.945 fused_ordering(57) 00:14:59.945 fused_ordering(58) 00:14:59.945 fused_ordering(59) 00:14:59.945 fused_ordering(60) 00:14:59.945 fused_ordering(61) 00:14:59.945 fused_ordering(62) 00:14:59.945 fused_ordering(63) 00:14:59.945 fused_ordering(64) 00:14:59.945 fused_ordering(65) 00:14:59.945 fused_ordering(66) 00:14:59.945 fused_ordering(67) 00:14:59.945 fused_ordering(68) 00:14:59.945 fused_ordering(69) 
00:14:59.945 fused_ordering(70) 00:14:59.945 fused_ordering(71) 00:14:59.945 fused_ordering(72) 00:14:59.945 fused_ordering(73) 00:14:59.945 fused_ordering(74) 00:14:59.945 fused_ordering(75) 00:14:59.945 fused_ordering(76) 00:14:59.945 fused_ordering(77) 00:14:59.945 fused_ordering(78) 00:14:59.945 fused_ordering(79) 00:14:59.945 fused_ordering(80) 00:14:59.945 fused_ordering(81) 00:14:59.945 fused_ordering(82) 00:14:59.945 fused_ordering(83) 00:14:59.945 fused_ordering(84) 00:14:59.945 fused_ordering(85) 00:14:59.945 fused_ordering(86) 00:14:59.945 fused_ordering(87) 00:14:59.945 fused_ordering(88) 00:14:59.945 fused_ordering(89) 00:14:59.945 fused_ordering(90) 00:14:59.945 fused_ordering(91) 00:14:59.945 fused_ordering(92) 00:14:59.945 fused_ordering(93) 00:14:59.945 fused_ordering(94) 00:14:59.945 fused_ordering(95) 00:14:59.945 fused_ordering(96) 00:14:59.945 fused_ordering(97) 00:14:59.945 fused_ordering(98) 00:14:59.945 fused_ordering(99) 00:14:59.945 fused_ordering(100) 00:14:59.945 fused_ordering(101) 00:14:59.945 fused_ordering(102) 00:14:59.945 fused_ordering(103) 00:14:59.945 fused_ordering(104) 00:14:59.945 fused_ordering(105) 00:14:59.945 fused_ordering(106) 00:14:59.945 fused_ordering(107) 00:14:59.945 fused_ordering(108) 00:14:59.945 fused_ordering(109) 00:14:59.945 fused_ordering(110) 00:14:59.945 fused_ordering(111) 00:14:59.945 fused_ordering(112) 00:14:59.945 fused_ordering(113) 00:14:59.945 fused_ordering(114) 00:14:59.946 fused_ordering(115) 00:14:59.946 fused_ordering(116) 00:14:59.946 fused_ordering(117) 00:14:59.946 fused_ordering(118) 00:14:59.946 fused_ordering(119) 00:14:59.946 fused_ordering(120) 00:14:59.946 fused_ordering(121) 00:14:59.946 fused_ordering(122) 00:14:59.946 fused_ordering(123) 00:14:59.946 fused_ordering(124) 00:14:59.946 fused_ordering(125) 00:14:59.946 fused_ordering(126) 00:14:59.946 fused_ordering(127) 00:14:59.946 fused_ordering(128) 00:14:59.946 fused_ordering(129) 00:14:59.946 fused_ordering(130) 00:14:59.946 fused_ordering(131) 00:14:59.946 fused_ordering(132) 00:14:59.946 fused_ordering(133) 00:14:59.946 fused_ordering(134) 00:14:59.946 fused_ordering(135) 00:14:59.946 fused_ordering(136) 00:14:59.946 fused_ordering(137) 00:14:59.946 fused_ordering(138) 00:14:59.946 fused_ordering(139) 00:14:59.946 fused_ordering(140) 00:14:59.946 fused_ordering(141) 00:14:59.946 fused_ordering(142) 00:14:59.946 fused_ordering(143) 00:14:59.946 fused_ordering(144) 00:14:59.946 fused_ordering(145) 00:14:59.946 fused_ordering(146) 00:14:59.946 fused_ordering(147) 00:14:59.946 fused_ordering(148) 00:14:59.946 fused_ordering(149) 00:14:59.946 fused_ordering(150) 00:14:59.946 fused_ordering(151) 00:14:59.946 fused_ordering(152) 00:14:59.946 fused_ordering(153) 00:14:59.946 fused_ordering(154) 00:14:59.946 fused_ordering(155) 00:14:59.946 fused_ordering(156) 00:14:59.946 fused_ordering(157) 00:14:59.946 fused_ordering(158) 00:14:59.946 fused_ordering(159) 00:14:59.946 fused_ordering(160) 00:14:59.946 fused_ordering(161) 00:14:59.946 fused_ordering(162) 00:14:59.946 fused_ordering(163) 00:14:59.946 fused_ordering(164) 00:14:59.946 fused_ordering(165) 00:14:59.946 fused_ordering(166) 00:14:59.946 fused_ordering(167) 00:14:59.946 fused_ordering(168) 00:14:59.946 fused_ordering(169) 00:14:59.946 fused_ordering(170) 00:14:59.946 fused_ordering(171) 00:14:59.946 fused_ordering(172) 00:14:59.946 fused_ordering(173) 00:14:59.946 fused_ordering(174) 00:14:59.946 fused_ordering(175) 00:14:59.946 fused_ordering(176) 00:14:59.946 fused_ordering(177) 00:14:59.946 
fused_ordering(178) 00:14:59.946 fused_ordering(179) 00:14:59.946 fused_ordering(180) 00:14:59.946 fused_ordering(181) 00:14:59.946 fused_ordering(182) 00:14:59.946 fused_ordering(183) 00:14:59.946 fused_ordering(184) 00:14:59.946 fused_ordering(185) 00:14:59.946 fused_ordering(186) 00:14:59.946 fused_ordering(187) 00:14:59.946 fused_ordering(188) 00:14:59.946 fused_ordering(189) 00:14:59.946 fused_ordering(190) 00:14:59.946 fused_ordering(191) 00:14:59.946 fused_ordering(192) 00:14:59.946 fused_ordering(193) 00:14:59.946 fused_ordering(194) 00:14:59.946 fused_ordering(195) 00:14:59.946 fused_ordering(196) 00:14:59.946 fused_ordering(197) 00:14:59.946 fused_ordering(198) 00:14:59.946 fused_ordering(199) 00:14:59.946 fused_ordering(200) 00:14:59.946 fused_ordering(201) 00:14:59.946 fused_ordering(202) 00:14:59.946 fused_ordering(203) 00:14:59.946 fused_ordering(204) 00:14:59.946 fused_ordering(205) 00:15:00.207 fused_ordering(206) 00:15:00.207 fused_ordering(207) 00:15:00.207 fused_ordering(208) 00:15:00.207 fused_ordering(209) 00:15:00.207 fused_ordering(210) 00:15:00.207 fused_ordering(211) 00:15:00.207 fused_ordering(212) 00:15:00.207 fused_ordering(213) 00:15:00.207 fused_ordering(214) 00:15:00.207 fused_ordering(215) 00:15:00.207 fused_ordering(216) 00:15:00.207 fused_ordering(217) 00:15:00.207 fused_ordering(218) 00:15:00.207 fused_ordering(219) 00:15:00.207 fused_ordering(220) 00:15:00.207 fused_ordering(221) 00:15:00.207 fused_ordering(222) 00:15:00.207 fused_ordering(223) 00:15:00.207 fused_ordering(224) 00:15:00.207 fused_ordering(225) 00:15:00.207 fused_ordering(226) 00:15:00.207 fused_ordering(227) 00:15:00.207 fused_ordering(228) 00:15:00.207 fused_ordering(229) 00:15:00.207 fused_ordering(230) 00:15:00.207 fused_ordering(231) 00:15:00.207 fused_ordering(232) 00:15:00.207 fused_ordering(233) 00:15:00.207 fused_ordering(234) 00:15:00.207 fused_ordering(235) 00:15:00.207 fused_ordering(236) 00:15:00.207 fused_ordering(237) 00:15:00.207 fused_ordering(238) 00:15:00.207 fused_ordering(239) 00:15:00.207 fused_ordering(240) 00:15:00.207 fused_ordering(241) 00:15:00.207 fused_ordering(242) 00:15:00.207 fused_ordering(243) 00:15:00.207 fused_ordering(244) 00:15:00.207 fused_ordering(245) 00:15:00.207 fused_ordering(246) 00:15:00.207 fused_ordering(247) 00:15:00.207 fused_ordering(248) 00:15:00.207 fused_ordering(249) 00:15:00.207 fused_ordering(250) 00:15:00.207 fused_ordering(251) 00:15:00.207 fused_ordering(252) 00:15:00.207 fused_ordering(253) 00:15:00.207 fused_ordering(254) 00:15:00.207 fused_ordering(255) 00:15:00.207 fused_ordering(256) 00:15:00.207 fused_ordering(257) 00:15:00.207 fused_ordering(258) 00:15:00.207 fused_ordering(259) 00:15:00.207 fused_ordering(260) 00:15:00.207 fused_ordering(261) 00:15:00.207 fused_ordering(262) 00:15:00.207 fused_ordering(263) 00:15:00.207 fused_ordering(264) 00:15:00.207 fused_ordering(265) 00:15:00.207 fused_ordering(266) 00:15:00.207 fused_ordering(267) 00:15:00.207 fused_ordering(268) 00:15:00.207 fused_ordering(269) 00:15:00.207 fused_ordering(270) 00:15:00.207 fused_ordering(271) 00:15:00.207 fused_ordering(272) 00:15:00.207 fused_ordering(273) 00:15:00.207 fused_ordering(274) 00:15:00.207 fused_ordering(275) 00:15:00.207 fused_ordering(276) 00:15:00.207 fused_ordering(277) 00:15:00.207 fused_ordering(278) 00:15:00.207 fused_ordering(279) 00:15:00.207 fused_ordering(280) 00:15:00.207 fused_ordering(281) 00:15:00.207 fused_ordering(282) 00:15:00.207 fused_ordering(283) 00:15:00.207 fused_ordering(284) 00:15:00.207 fused_ordering(285) 
00:15:00.207 fused_ordering(286) 00:15:00.207 fused_ordering(287) 00:15:00.207 fused_ordering(288) 00:15:00.207 fused_ordering(289) 00:15:00.207 fused_ordering(290) 00:15:00.207 fused_ordering(291) 00:15:00.207 fused_ordering(292) 00:15:00.207 fused_ordering(293) 00:15:00.207 fused_ordering(294) 00:15:00.207 fused_ordering(295) 00:15:00.207 fused_ordering(296) 00:15:00.207 fused_ordering(297) 00:15:00.207 fused_ordering(298) 00:15:00.207 fused_ordering(299) 00:15:00.207 fused_ordering(300) 00:15:00.207 fused_ordering(301) 00:15:00.207 fused_ordering(302) 00:15:00.207 fused_ordering(303) 00:15:00.207 fused_ordering(304) 00:15:00.207 fused_ordering(305) 00:15:00.207 fused_ordering(306) 00:15:00.207 fused_ordering(307) 00:15:00.207 fused_ordering(308) 00:15:00.207 fused_ordering(309) 00:15:00.207 fused_ordering(310) 00:15:00.207 fused_ordering(311) 00:15:00.207 fused_ordering(312) 00:15:00.207 fused_ordering(313) 00:15:00.207 fused_ordering(314) 00:15:00.207 fused_ordering(315) 00:15:00.207 fused_ordering(316) 00:15:00.207 fused_ordering(317) 00:15:00.207 fused_ordering(318) 00:15:00.207 fused_ordering(319) 00:15:00.207 fused_ordering(320) 00:15:00.207 fused_ordering(321) 00:15:00.207 fused_ordering(322) 00:15:00.207 fused_ordering(323) 00:15:00.207 fused_ordering(324) 00:15:00.207 fused_ordering(325) 00:15:00.207 fused_ordering(326) 00:15:00.207 fused_ordering(327) 00:15:00.207 fused_ordering(328) 00:15:00.207 fused_ordering(329) 00:15:00.207 fused_ordering(330) 00:15:00.207 fused_ordering(331) 00:15:00.207 fused_ordering(332) 00:15:00.207 fused_ordering(333) 00:15:00.207 fused_ordering(334) 00:15:00.207 fused_ordering(335) 00:15:00.207 fused_ordering(336) 00:15:00.207 fused_ordering(337) 00:15:00.207 fused_ordering(338) 00:15:00.207 fused_ordering(339) 00:15:00.207 fused_ordering(340) 00:15:00.207 fused_ordering(341) 00:15:00.207 fused_ordering(342) 00:15:00.207 fused_ordering(343) 00:15:00.207 fused_ordering(344) 00:15:00.207 fused_ordering(345) 00:15:00.207 fused_ordering(346) 00:15:00.207 fused_ordering(347) 00:15:00.207 fused_ordering(348) 00:15:00.207 fused_ordering(349) 00:15:00.207 fused_ordering(350) 00:15:00.207 fused_ordering(351) 00:15:00.207 fused_ordering(352) 00:15:00.207 fused_ordering(353) 00:15:00.207 fused_ordering(354) 00:15:00.207 fused_ordering(355) 00:15:00.207 fused_ordering(356) 00:15:00.207 fused_ordering(357) 00:15:00.207 fused_ordering(358) 00:15:00.207 fused_ordering(359) 00:15:00.207 fused_ordering(360) 00:15:00.207 fused_ordering(361) 00:15:00.207 fused_ordering(362) 00:15:00.207 fused_ordering(363) 00:15:00.207 fused_ordering(364) 00:15:00.207 fused_ordering(365) 00:15:00.207 fused_ordering(366) 00:15:00.207 fused_ordering(367) 00:15:00.207 fused_ordering(368) 00:15:00.207 fused_ordering(369) 00:15:00.207 fused_ordering(370) 00:15:00.207 fused_ordering(371) 00:15:00.207 fused_ordering(372) 00:15:00.207 fused_ordering(373) 00:15:00.207 fused_ordering(374) 00:15:00.207 fused_ordering(375) 00:15:00.207 fused_ordering(376) 00:15:00.207 fused_ordering(377) 00:15:00.207 fused_ordering(378) 00:15:00.207 fused_ordering(379) 00:15:00.207 fused_ordering(380) 00:15:00.207 fused_ordering(381) 00:15:00.207 fused_ordering(382) 00:15:00.207 fused_ordering(383) 00:15:00.207 fused_ordering(384) 00:15:00.207 fused_ordering(385) 00:15:00.207 fused_ordering(386) 00:15:00.207 fused_ordering(387) 00:15:00.207 fused_ordering(388) 00:15:00.207 fused_ordering(389) 00:15:00.207 fused_ordering(390) 00:15:00.207 fused_ordering(391) 00:15:00.207 fused_ordering(392) 00:15:00.207 
fused_ordering(393) 00:15:00.207 fused_ordering(394) 00:15:00.207 fused_ordering(395) 00:15:00.207 fused_ordering(396) 00:15:00.207 fused_ordering(397) 00:15:00.207 fused_ordering(398) 00:15:00.207 fused_ordering(399) 00:15:00.207 fused_ordering(400) 00:15:00.207 fused_ordering(401) 00:15:00.207 fused_ordering(402) 00:15:00.207 fused_ordering(403) 00:15:00.207 fused_ordering(404) 00:15:00.207 fused_ordering(405) 00:15:00.207 fused_ordering(406) 00:15:00.207 fused_ordering(407) 00:15:00.207 fused_ordering(408) 00:15:00.207 fused_ordering(409) 00:15:00.207 fused_ordering(410) 00:15:00.469 fused_ordering(411) 00:15:00.469 fused_ordering(412) 00:15:00.469 fused_ordering(413) 00:15:00.469 fused_ordering(414) 00:15:00.469 fused_ordering(415) 00:15:00.469 fused_ordering(416) 00:15:00.469 fused_ordering(417) 00:15:00.469 fused_ordering(418) 00:15:00.469 fused_ordering(419) 00:15:00.469 fused_ordering(420) 00:15:00.469 fused_ordering(421) 00:15:00.469 fused_ordering(422) 00:15:00.469 fused_ordering(423) 00:15:00.469 fused_ordering(424) 00:15:00.469 fused_ordering(425) 00:15:00.469 fused_ordering(426) 00:15:00.469 fused_ordering(427) 00:15:00.469 fused_ordering(428) 00:15:00.469 fused_ordering(429) 00:15:00.469 fused_ordering(430) 00:15:00.469 fused_ordering(431) 00:15:00.469 fused_ordering(432) 00:15:00.469 fused_ordering(433) 00:15:00.469 fused_ordering(434) 00:15:00.469 fused_ordering(435) 00:15:00.469 fused_ordering(436) 00:15:00.469 fused_ordering(437) 00:15:00.469 fused_ordering(438) 00:15:00.469 fused_ordering(439) 00:15:00.469 fused_ordering(440) 00:15:00.469 fused_ordering(441) 00:15:00.469 fused_ordering(442) 00:15:00.469 fused_ordering(443) 00:15:00.469 fused_ordering(444) 00:15:00.469 fused_ordering(445) 00:15:00.469 fused_ordering(446) 00:15:00.469 fused_ordering(447) 00:15:00.469 fused_ordering(448) 00:15:00.469 fused_ordering(449) 00:15:00.469 fused_ordering(450) 00:15:00.469 fused_ordering(451) 00:15:00.469 fused_ordering(452) 00:15:00.469 fused_ordering(453) 00:15:00.469 fused_ordering(454) 00:15:00.469 fused_ordering(455) 00:15:00.469 fused_ordering(456) 00:15:00.469 fused_ordering(457) 00:15:00.469 fused_ordering(458) 00:15:00.469 fused_ordering(459) 00:15:00.469 fused_ordering(460) 00:15:00.469 fused_ordering(461) 00:15:00.469 fused_ordering(462) 00:15:00.469 fused_ordering(463) 00:15:00.469 fused_ordering(464) 00:15:00.469 fused_ordering(465) 00:15:00.469 fused_ordering(466) 00:15:00.469 fused_ordering(467) 00:15:00.469 fused_ordering(468) 00:15:00.469 fused_ordering(469) 00:15:00.469 fused_ordering(470) 00:15:00.469 fused_ordering(471) 00:15:00.469 fused_ordering(472) 00:15:00.469 fused_ordering(473) 00:15:00.469 fused_ordering(474) 00:15:00.469 fused_ordering(475) 00:15:00.469 fused_ordering(476) 00:15:00.469 fused_ordering(477) 00:15:00.469 fused_ordering(478) 00:15:00.469 fused_ordering(479) 00:15:00.469 fused_ordering(480) 00:15:00.469 fused_ordering(481) 00:15:00.469 fused_ordering(482) 00:15:00.470 fused_ordering(483) 00:15:00.470 fused_ordering(484) 00:15:00.470 fused_ordering(485) 00:15:00.470 fused_ordering(486) 00:15:00.470 fused_ordering(487) 00:15:00.470 fused_ordering(488) 00:15:00.470 fused_ordering(489) 00:15:00.470 fused_ordering(490) 00:15:00.470 fused_ordering(491) 00:15:00.470 fused_ordering(492) 00:15:00.470 fused_ordering(493) 00:15:00.470 fused_ordering(494) 00:15:00.470 fused_ordering(495) 00:15:00.470 fused_ordering(496) 00:15:00.470 fused_ordering(497) 00:15:00.470 fused_ordering(498) 00:15:00.470 fused_ordering(499) 00:15:00.470 fused_ordering(500) 
00:15:00.470 fused_ordering(501) 00:15:00.470 fused_ordering(502) 00:15:00.470 fused_ordering(503) 00:15:00.470 fused_ordering(504) 00:15:00.470 fused_ordering(505) 00:15:00.470 fused_ordering(506) 00:15:00.470 fused_ordering(507) 00:15:00.470 fused_ordering(508) 00:15:00.470 fused_ordering(509) 00:15:00.470 fused_ordering(510) 00:15:00.470 fused_ordering(511) 00:15:00.470 fused_ordering(512) 00:15:00.470 fused_ordering(513) 00:15:00.470 fused_ordering(514) 00:15:00.470 fused_ordering(515) 00:15:00.470 fused_ordering(516) 00:15:00.470 fused_ordering(517) 00:15:00.470 fused_ordering(518) 00:15:00.470 fused_ordering(519) 00:15:00.470 fused_ordering(520) 00:15:00.470 fused_ordering(521) 00:15:00.470 fused_ordering(522) 00:15:00.470 fused_ordering(523) 00:15:00.470 fused_ordering(524) 00:15:00.470 fused_ordering(525) 00:15:00.470 fused_ordering(526) 00:15:00.470 fused_ordering(527) 00:15:00.470 fused_ordering(528) 00:15:00.470 fused_ordering(529) 00:15:00.470 fused_ordering(530) 00:15:00.470 fused_ordering(531) 00:15:00.470 fused_ordering(532) 00:15:00.470 fused_ordering(533) 00:15:00.470 fused_ordering(534) 00:15:00.470 fused_ordering(535) 00:15:00.470 fused_ordering(536) 00:15:00.470 fused_ordering(537) 00:15:00.470 fused_ordering(538) 00:15:00.470 fused_ordering(539) 00:15:00.470 fused_ordering(540) 00:15:00.470 fused_ordering(541) 00:15:00.470 fused_ordering(542) 00:15:00.470 fused_ordering(543) 00:15:00.470 fused_ordering(544) 00:15:00.470 fused_ordering(545) 00:15:00.470 fused_ordering(546) 00:15:00.470 fused_ordering(547) 00:15:00.470 fused_ordering(548) 00:15:00.470 fused_ordering(549) 00:15:00.470 fused_ordering(550) 00:15:00.470 fused_ordering(551) 00:15:00.470 fused_ordering(552) 00:15:00.470 fused_ordering(553) 00:15:00.470 fused_ordering(554) 00:15:00.470 fused_ordering(555) 00:15:00.470 fused_ordering(556) 00:15:00.470 fused_ordering(557) 00:15:00.470 fused_ordering(558) 00:15:00.470 fused_ordering(559) 00:15:00.470 fused_ordering(560) 00:15:00.470 fused_ordering(561) 00:15:00.470 fused_ordering(562) 00:15:00.470 fused_ordering(563) 00:15:00.470 fused_ordering(564) 00:15:00.470 fused_ordering(565) 00:15:00.470 fused_ordering(566) 00:15:00.470 fused_ordering(567) 00:15:00.470 fused_ordering(568) 00:15:00.470 fused_ordering(569) 00:15:00.470 fused_ordering(570) 00:15:00.470 fused_ordering(571) 00:15:00.470 fused_ordering(572) 00:15:00.470 fused_ordering(573) 00:15:00.470 fused_ordering(574) 00:15:00.470 fused_ordering(575) 00:15:00.470 fused_ordering(576) 00:15:00.470 fused_ordering(577) 00:15:00.470 fused_ordering(578) 00:15:00.470 fused_ordering(579) 00:15:00.470 fused_ordering(580) 00:15:00.470 fused_ordering(581) 00:15:00.470 fused_ordering(582) 00:15:00.470 fused_ordering(583) 00:15:00.470 fused_ordering(584) 00:15:00.470 fused_ordering(585) 00:15:00.470 fused_ordering(586) 00:15:00.470 fused_ordering(587) 00:15:00.470 fused_ordering(588) 00:15:00.470 fused_ordering(589) 00:15:00.470 fused_ordering(590) 00:15:00.470 fused_ordering(591) 00:15:00.470 fused_ordering(592) 00:15:00.470 fused_ordering(593) 00:15:00.470 fused_ordering(594) 00:15:00.470 fused_ordering(595) 00:15:00.470 fused_ordering(596) 00:15:00.470 fused_ordering(597) 00:15:00.470 fused_ordering(598) 00:15:00.470 fused_ordering(599) 00:15:00.470 fused_ordering(600) 00:15:00.470 fused_ordering(601) 00:15:00.470 fused_ordering(602) 00:15:00.470 fused_ordering(603) 00:15:00.470 fused_ordering(604) 00:15:00.470 fused_ordering(605) 00:15:00.470 fused_ordering(606) 00:15:00.470 fused_ordering(607) 00:15:00.470 
fused_ordering(608) 00:15:00.470 fused_ordering(609) 00:15:00.470 fused_ordering(610) 00:15:00.470 fused_ordering(611) 00:15:00.470 fused_ordering(612) 00:15:00.470 fused_ordering(613) 00:15:00.470 fused_ordering(614) 00:15:00.470 fused_ordering(615) 00:15:01.040 fused_ordering(616) 00:15:01.040 fused_ordering(617) 00:15:01.040 fused_ordering(618) 00:15:01.040 fused_ordering(619) 00:15:01.041 fused_ordering(620) 00:15:01.041 fused_ordering(621) 00:15:01.041 fused_ordering(622) 00:15:01.041 fused_ordering(623) 00:15:01.041 fused_ordering(624) 00:15:01.041 fused_ordering(625) 00:15:01.041 fused_ordering(626) 00:15:01.041 fused_ordering(627) 00:15:01.041 fused_ordering(628) 00:15:01.041 fused_ordering(629) 00:15:01.041 fused_ordering(630) 00:15:01.041 fused_ordering(631) 00:15:01.041 fused_ordering(632) 00:15:01.041 fused_ordering(633) 00:15:01.041 fused_ordering(634) 00:15:01.041 fused_ordering(635) 00:15:01.041 fused_ordering(636) 00:15:01.041 fused_ordering(637) 00:15:01.041 fused_ordering(638) 00:15:01.041 fused_ordering(639) 00:15:01.041 fused_ordering(640) 00:15:01.041 fused_ordering(641) 00:15:01.041 fused_ordering(642) 00:15:01.041 fused_ordering(643) 00:15:01.041 fused_ordering(644) 00:15:01.041 fused_ordering(645) 00:15:01.041 fused_ordering(646) 00:15:01.041 fused_ordering(647) 00:15:01.041 fused_ordering(648) 00:15:01.041 fused_ordering(649) 00:15:01.041 fused_ordering(650) 00:15:01.041 fused_ordering(651) 00:15:01.041 fused_ordering(652) 00:15:01.041 fused_ordering(653) 00:15:01.041 fused_ordering(654) 00:15:01.041 fused_ordering(655) 00:15:01.041 fused_ordering(656) 00:15:01.041 fused_ordering(657) 00:15:01.041 fused_ordering(658) 00:15:01.041 fused_ordering(659) 00:15:01.041 fused_ordering(660) 00:15:01.041 fused_ordering(661) 00:15:01.041 fused_ordering(662) 00:15:01.041 fused_ordering(663) 00:15:01.041 fused_ordering(664) 00:15:01.041 fused_ordering(665) 00:15:01.041 fused_ordering(666) 00:15:01.041 fused_ordering(667) 00:15:01.041 fused_ordering(668) 00:15:01.041 fused_ordering(669) 00:15:01.041 fused_ordering(670) 00:15:01.041 fused_ordering(671) 00:15:01.041 fused_ordering(672) 00:15:01.041 fused_ordering(673) 00:15:01.041 fused_ordering(674) 00:15:01.041 fused_ordering(675) 00:15:01.041 fused_ordering(676) 00:15:01.041 fused_ordering(677) 00:15:01.041 fused_ordering(678) 00:15:01.041 fused_ordering(679) 00:15:01.041 fused_ordering(680) 00:15:01.041 fused_ordering(681) 00:15:01.041 fused_ordering(682) 00:15:01.041 fused_ordering(683) 00:15:01.041 fused_ordering(684) 00:15:01.041 fused_ordering(685) 00:15:01.041 fused_ordering(686) 00:15:01.041 fused_ordering(687) 00:15:01.041 fused_ordering(688) 00:15:01.041 fused_ordering(689) 00:15:01.041 fused_ordering(690) 00:15:01.041 fused_ordering(691) 00:15:01.041 fused_ordering(692) 00:15:01.041 fused_ordering(693) 00:15:01.041 fused_ordering(694) 00:15:01.041 fused_ordering(695) 00:15:01.041 fused_ordering(696) 00:15:01.041 fused_ordering(697) 00:15:01.041 fused_ordering(698) 00:15:01.041 fused_ordering(699) 00:15:01.041 fused_ordering(700) 00:15:01.041 fused_ordering(701) 00:15:01.041 fused_ordering(702) 00:15:01.041 fused_ordering(703) 00:15:01.041 fused_ordering(704) 00:15:01.041 fused_ordering(705) 00:15:01.041 fused_ordering(706) 00:15:01.041 fused_ordering(707) 00:15:01.041 fused_ordering(708) 00:15:01.041 fused_ordering(709) 00:15:01.041 fused_ordering(710) 00:15:01.041 fused_ordering(711) 00:15:01.041 fused_ordering(712) 00:15:01.041 fused_ordering(713) 00:15:01.041 fused_ordering(714) 00:15:01.041 fused_ordering(715) 
00:15:01.041 fused_ordering(716) 00:15:01.041 fused_ordering(717) 00:15:01.041 fused_ordering(718) 00:15:01.041 fused_ordering(719) 00:15:01.041 fused_ordering(720) 00:15:01.041 fused_ordering(721) 00:15:01.041 fused_ordering(722) 00:15:01.041 fused_ordering(723) 00:15:01.041 fused_ordering(724) 00:15:01.041 fused_ordering(725) 00:15:01.041 fused_ordering(726) 00:15:01.041 fused_ordering(727) 00:15:01.041 fused_ordering(728) 00:15:01.041 fused_ordering(729) 00:15:01.041 fused_ordering(730) 00:15:01.041 fused_ordering(731) 00:15:01.041 fused_ordering(732) 00:15:01.041 fused_ordering(733) 00:15:01.041 fused_ordering(734) 00:15:01.041 fused_ordering(735) 00:15:01.041 fused_ordering(736) 00:15:01.041 fused_ordering(737) 00:15:01.041 fused_ordering(738) 00:15:01.041 fused_ordering(739) 00:15:01.041 fused_ordering(740) 00:15:01.041 fused_ordering(741) 00:15:01.041 fused_ordering(742) 00:15:01.041 fused_ordering(743) 00:15:01.041 fused_ordering(744) 00:15:01.041 fused_ordering(745) 00:15:01.041 fused_ordering(746) 00:15:01.041 fused_ordering(747) 00:15:01.041 fused_ordering(748) 00:15:01.041 fused_ordering(749) 00:15:01.041 fused_ordering(750) 00:15:01.041 fused_ordering(751) 00:15:01.041 fused_ordering(752) 00:15:01.041 fused_ordering(753) 00:15:01.041 fused_ordering(754) 00:15:01.041 fused_ordering(755) 00:15:01.041 fused_ordering(756) 00:15:01.041 fused_ordering(757) 00:15:01.041 fused_ordering(758) 00:15:01.041 fused_ordering(759) 00:15:01.041 fused_ordering(760) 00:15:01.041 fused_ordering(761) 00:15:01.041 fused_ordering(762) 00:15:01.041 fused_ordering(763) 00:15:01.041 fused_ordering(764) 00:15:01.041 fused_ordering(765) 00:15:01.041 fused_ordering(766) 00:15:01.041 fused_ordering(767) 00:15:01.041 fused_ordering(768) 00:15:01.041 fused_ordering(769) 00:15:01.041 fused_ordering(770) 00:15:01.041 fused_ordering(771) 00:15:01.041 fused_ordering(772) 00:15:01.041 fused_ordering(773) 00:15:01.041 fused_ordering(774) 00:15:01.041 fused_ordering(775) 00:15:01.041 fused_ordering(776) 00:15:01.041 fused_ordering(777) 00:15:01.041 fused_ordering(778) 00:15:01.041 fused_ordering(779) 00:15:01.041 fused_ordering(780) 00:15:01.041 fused_ordering(781) 00:15:01.041 fused_ordering(782) 00:15:01.041 fused_ordering(783) 00:15:01.041 fused_ordering(784) 00:15:01.041 fused_ordering(785) 00:15:01.041 fused_ordering(786) 00:15:01.041 fused_ordering(787) 00:15:01.041 fused_ordering(788) 00:15:01.041 fused_ordering(789) 00:15:01.041 fused_ordering(790) 00:15:01.041 fused_ordering(791) 00:15:01.041 fused_ordering(792) 00:15:01.041 fused_ordering(793) 00:15:01.041 fused_ordering(794) 00:15:01.041 fused_ordering(795) 00:15:01.041 fused_ordering(796) 00:15:01.041 fused_ordering(797) 00:15:01.041 fused_ordering(798) 00:15:01.041 fused_ordering(799) 00:15:01.041 fused_ordering(800) 00:15:01.041 fused_ordering(801) 00:15:01.041 fused_ordering(802) 00:15:01.041 fused_ordering(803) 00:15:01.041 fused_ordering(804) 00:15:01.041 fused_ordering(805) 00:15:01.041 fused_ordering(806) 00:15:01.041 fused_ordering(807) 00:15:01.041 fused_ordering(808) 00:15:01.041 fused_ordering(809) 00:15:01.041 fused_ordering(810) 00:15:01.041 fused_ordering(811) 00:15:01.041 fused_ordering(812) 00:15:01.041 fused_ordering(813) 00:15:01.041 fused_ordering(814) 00:15:01.041 fused_ordering(815) 00:15:01.041 fused_ordering(816) 00:15:01.041 fused_ordering(817) 00:15:01.041 fused_ordering(818) 00:15:01.041 fused_ordering(819) 00:15:01.041 fused_ordering(820) 00:15:01.664 fused_ordering(821) 00:15:01.664 fused_ordering(822) 00:15:01.664 
fused_ordering(823) 00:15:01.664 fused_ordering(824) 00:15:01.664 fused_ordering(825) 00:15:01.664 fused_ordering(826) 00:15:01.664 fused_ordering(827) 00:15:01.664 fused_ordering(828) 00:15:01.664 fused_ordering(829) 00:15:01.664 fused_ordering(830) 00:15:01.664 fused_ordering(831) 00:15:01.664 fused_ordering(832) 00:15:01.664 fused_ordering(833) 00:15:01.664 fused_ordering(834) 00:15:01.664 fused_ordering(835) 00:15:01.664 fused_ordering(836) 00:15:01.664 fused_ordering(837) 00:15:01.664 fused_ordering(838) 00:15:01.664 fused_ordering(839) 00:15:01.664 fused_ordering(840) 00:15:01.664 fused_ordering(841) 00:15:01.664 fused_ordering(842) 00:15:01.664 fused_ordering(843) 00:15:01.664 fused_ordering(844) 00:15:01.664 fused_ordering(845) 00:15:01.664 fused_ordering(846) 00:15:01.664 fused_ordering(847) 00:15:01.664 fused_ordering(848) 00:15:01.664 fused_ordering(849) 00:15:01.664 fused_ordering(850) 00:15:01.664 fused_ordering(851) 00:15:01.664 fused_ordering(852) 00:15:01.664 fused_ordering(853) 00:15:01.664 fused_ordering(854) 00:15:01.664 fused_ordering(855) 00:15:01.664 fused_ordering(856) 00:15:01.664 fused_ordering(857) 00:15:01.664 fused_ordering(858) 00:15:01.664 fused_ordering(859) 00:15:01.664 fused_ordering(860) 00:15:01.664 fused_ordering(861) 00:15:01.664 fused_ordering(862) 00:15:01.664 fused_ordering(863) 00:15:01.664 fused_ordering(864) 00:15:01.664 fused_ordering(865) 00:15:01.664 fused_ordering(866) 00:15:01.664 fused_ordering(867) 00:15:01.664 fused_ordering(868) 00:15:01.664 fused_ordering(869) 00:15:01.664 fused_ordering(870) 00:15:01.664 fused_ordering(871) 00:15:01.664 fused_ordering(872) 00:15:01.664 fused_ordering(873) 00:15:01.664 fused_ordering(874) 00:15:01.664 fused_ordering(875) 00:15:01.664 fused_ordering(876) 00:15:01.664 fused_ordering(877) 00:15:01.664 fused_ordering(878) 00:15:01.664 fused_ordering(879) 00:15:01.664 fused_ordering(880) 00:15:01.664 fused_ordering(881) 00:15:01.664 fused_ordering(882) 00:15:01.664 fused_ordering(883) 00:15:01.664 fused_ordering(884) 00:15:01.664 fused_ordering(885) 00:15:01.664 fused_ordering(886) 00:15:01.664 fused_ordering(887) 00:15:01.664 fused_ordering(888) 00:15:01.664 fused_ordering(889) 00:15:01.664 fused_ordering(890) 00:15:01.664 fused_ordering(891) 00:15:01.664 fused_ordering(892) 00:15:01.665 fused_ordering(893) 00:15:01.665 fused_ordering(894) 00:15:01.665 fused_ordering(895) 00:15:01.665 fused_ordering(896) 00:15:01.665 fused_ordering(897) 00:15:01.665 fused_ordering(898) 00:15:01.665 fused_ordering(899) 00:15:01.665 fused_ordering(900) 00:15:01.665 fused_ordering(901) 00:15:01.665 fused_ordering(902) 00:15:01.665 fused_ordering(903) 00:15:01.665 fused_ordering(904) 00:15:01.665 fused_ordering(905) 00:15:01.665 fused_ordering(906) 00:15:01.665 fused_ordering(907) 00:15:01.665 fused_ordering(908) 00:15:01.665 fused_ordering(909) 00:15:01.665 fused_ordering(910) 00:15:01.665 fused_ordering(911) 00:15:01.665 fused_ordering(912) 00:15:01.665 fused_ordering(913) 00:15:01.665 fused_ordering(914) 00:15:01.665 fused_ordering(915) 00:15:01.665 fused_ordering(916) 00:15:01.665 fused_ordering(917) 00:15:01.665 fused_ordering(918) 00:15:01.665 fused_ordering(919) 00:15:01.665 fused_ordering(920) 00:15:01.665 fused_ordering(921) 00:15:01.665 fused_ordering(922) 00:15:01.665 fused_ordering(923) 00:15:01.665 fused_ordering(924) 00:15:01.665 fused_ordering(925) 00:15:01.665 fused_ordering(926) 00:15:01.665 fused_ordering(927) 00:15:01.665 fused_ordering(928) 00:15:01.665 fused_ordering(929) 00:15:01.665 fused_ordering(930) 
00:15:01.665 fused_ordering(931) 00:15:01.665 fused_ordering(932) 00:15:01.665 fused_ordering(933) 00:15:01.665 fused_ordering(934) 00:15:01.665 fused_ordering(935) 00:15:01.665 fused_ordering(936) 00:15:01.665 fused_ordering(937) 00:15:01.665 fused_ordering(938) 00:15:01.665 fused_ordering(939) 00:15:01.665 fused_ordering(940) 00:15:01.665 fused_ordering(941) 00:15:01.665 fused_ordering(942) 00:15:01.665 fused_ordering(943) 00:15:01.665 fused_ordering(944) 00:15:01.665 fused_ordering(945) 00:15:01.665 fused_ordering(946) 00:15:01.665 fused_ordering(947) 00:15:01.665 fused_ordering(948) 00:15:01.665 fused_ordering(949) 00:15:01.665 fused_ordering(950) 00:15:01.665 fused_ordering(951) 00:15:01.665 fused_ordering(952) 00:15:01.665 fused_ordering(953) 00:15:01.665 fused_ordering(954) 00:15:01.665 fused_ordering(955) 00:15:01.665 fused_ordering(956) 00:15:01.665 fused_ordering(957) 00:15:01.665 fused_ordering(958) 00:15:01.665 fused_ordering(959) 00:15:01.665 fused_ordering(960) 00:15:01.665 fused_ordering(961) 00:15:01.665 fused_ordering(962) 00:15:01.665 fused_ordering(963) 00:15:01.665 fused_ordering(964) 00:15:01.665 fused_ordering(965) 00:15:01.665 fused_ordering(966) 00:15:01.665 fused_ordering(967) 00:15:01.665 fused_ordering(968) 00:15:01.665 fused_ordering(969) 00:15:01.665 fused_ordering(970) 00:15:01.665 fused_ordering(971) 00:15:01.665 fused_ordering(972) 00:15:01.665 fused_ordering(973) 00:15:01.665 fused_ordering(974) 00:15:01.665 fused_ordering(975) 00:15:01.665 fused_ordering(976) 00:15:01.665 fused_ordering(977) 00:15:01.665 fused_ordering(978) 00:15:01.665 fused_ordering(979) 00:15:01.665 fused_ordering(980) 00:15:01.665 fused_ordering(981) 00:15:01.665 fused_ordering(982) 00:15:01.665 fused_ordering(983) 00:15:01.665 fused_ordering(984) 00:15:01.665 fused_ordering(985) 00:15:01.665 fused_ordering(986) 00:15:01.665 fused_ordering(987) 00:15:01.665 fused_ordering(988) 00:15:01.665 fused_ordering(989) 00:15:01.665 fused_ordering(990) 00:15:01.665 fused_ordering(991) 00:15:01.665 fused_ordering(992) 00:15:01.665 fused_ordering(993) 00:15:01.665 fused_ordering(994) 00:15:01.665 fused_ordering(995) 00:15:01.665 fused_ordering(996) 00:15:01.665 fused_ordering(997) 00:15:01.665 fused_ordering(998) 00:15:01.665 fused_ordering(999) 00:15:01.665 fused_ordering(1000) 00:15:01.665 fused_ordering(1001) 00:15:01.665 fused_ordering(1002) 00:15:01.665 fused_ordering(1003) 00:15:01.665 fused_ordering(1004) 00:15:01.665 fused_ordering(1005) 00:15:01.665 fused_ordering(1006) 00:15:01.665 fused_ordering(1007) 00:15:01.665 fused_ordering(1008) 00:15:01.665 fused_ordering(1009) 00:15:01.665 fused_ordering(1010) 00:15:01.665 fused_ordering(1011) 00:15:01.665 fused_ordering(1012) 00:15:01.665 fused_ordering(1013) 00:15:01.665 fused_ordering(1014) 00:15:01.665 fused_ordering(1015) 00:15:01.665 fused_ordering(1016) 00:15:01.665 fused_ordering(1017) 00:15:01.665 fused_ordering(1018) 00:15:01.665 fused_ordering(1019) 00:15:01.665 fused_ordering(1020) 00:15:01.665 fused_ordering(1021) 00:15:01.665 fused_ordering(1022) 00:15:01.665 fused_ordering(1023) 00:15:01.665 05:30:04 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:01.665 05:30:04 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:01.665 05:30:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.665 05:30:04 -- nvmf/common.sh@116 -- # sync 00:15:01.665 05:30:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.665 05:30:04 -- nvmf/common.sh@119 -- # set +e 00:15:01.665 05:30:04 -- nvmf/common.sh@120 -- # for i in {1..20} 
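Behind the 1024 fused_ordering(...) lines above, the target was configured over SPDK's JSON-RPC interface (the rpc_cmd calls traced before the run). Outside the autotest wrappers, an equivalent bring-up would look roughly like the sketch below; it assumes the SPDK tree is at $SPDK_DIR and the default RPC socket /var/tmp/spdk.sock, and it is a reconstruction of the traced sequence, not the test script itself:

  # Target app inside the test namespace, core mask 0x2 (reactor on core 1 in the log).
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as in this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                       # null bdev, reported as a 1 GB namespace
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Initiator-side workload that produced the fused_ordering(0..1023) output above.
  "$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Running the target under ip netns exec is what makes the listener at 10.0.0.2:4420 reachable only through the cvl_0_0/cvl_0_1 link set up earlier.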
00:15:01.665 05:30:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.665 rmmod nvme_tcp 00:15:01.665 rmmod nvme_fabrics 00:15:01.665 rmmod nvme_keyring 00:15:01.665 05:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.665 05:30:04 -- nvmf/common.sh@123 -- # set -e 00:15:01.665 05:30:04 -- nvmf/common.sh@124 -- # return 0 00:15:01.665 05:30:04 -- nvmf/common.sh@477 -- # '[' -n 1755891 ']' 00:15:01.665 05:30:04 -- nvmf/common.sh@478 -- # killprocess 1755891 00:15:01.665 05:30:04 -- common/autotest_common.sh@936 -- # '[' -z 1755891 ']' 00:15:01.665 05:30:04 -- common/autotest_common.sh@940 -- # kill -0 1755891 00:15:01.665 05:30:04 -- common/autotest_common.sh@941 -- # uname 00:15:01.665 05:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.665 05:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755891 00:15:01.665 05:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.665 05:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.665 05:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1755891' 00:15:01.665 killing process with pid 1755891 00:15:01.665 05:30:04 -- common/autotest_common.sh@955 -- # kill 1755891 00:15:01.665 05:30:04 -- common/autotest_common.sh@960 -- # wait 1755891 00:15:01.926 05:30:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:01.926 05:30:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:01.926 05:30:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:01.926 05:30:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.926 05:30:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:01.926 05:30:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.926 05:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.926 05:30:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.835 05:30:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:03.835 00:15:03.835 real 0m13.165s 00:15:03.835 user 0m6.897s 00:15:03.835 sys 0m6.813s 00:15:03.835 05:30:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:03.835 05:30:07 -- common/autotest_common.sh@10 -- # set +x 00:15:03.835 ************************************ 00:15:03.835 END TEST nvmf_fused_ordering 00:15:03.835 ************************************ 00:15:04.096 05:30:07 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:04.096 05:30:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:04.096 05:30:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.096 05:30:07 -- common/autotest_common.sh@10 -- # set +x 00:15:04.096 ************************************ 00:15:04.096 START TEST nvmf_delete_subsystem 00:15:04.096 ************************************ 00:15:04.096 05:30:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:04.096 * Looking for test storage... 
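The nvmftestfini teardown traced just above undoes all of this before the next test starts: the kernel initiator modules are unloaded, the target process is killed, and the test addressing is removed. A rough equivalent, assuming $nvmfpid holds the nvmf_tgt PID (1755891 in this run) and that _remove_spdk_ns ends up deleting the cvl_0_0_ns_spdk namespace, is:

  # Sketch of the fused_ordering cleanup; the namespace deletion is an assumption
  # about what _remove_spdk_ns does (its output is redirected away in the log).
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  kill "$nvmfpid" && wait "$nvmfpid"       # killprocess 1755891
  ip netns delete cvl_0_0_ns_spdk          # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                 # drop the initiator-side test address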
00:15:04.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.096 05:30:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:04.096 05:30:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:04.096 05:30:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:04.096 05:30:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:04.096 05:30:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:04.096 05:30:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:04.096 05:30:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:04.096 05:30:07 -- scripts/common.sh@335 -- # IFS=.-: 00:15:04.096 05:30:07 -- scripts/common.sh@335 -- # read -ra ver1 00:15:04.096 05:30:07 -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.096 05:30:07 -- scripts/common.sh@336 -- # read -ra ver2 00:15:04.096 05:30:07 -- scripts/common.sh@337 -- # local 'op=<' 00:15:04.096 05:30:07 -- scripts/common.sh@339 -- # ver1_l=2 00:15:04.096 05:30:07 -- scripts/common.sh@340 -- # ver2_l=1 00:15:04.096 05:30:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:04.096 05:30:07 -- scripts/common.sh@343 -- # case "$op" in 00:15:04.096 05:30:07 -- scripts/common.sh@344 -- # : 1 00:15:04.096 05:30:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:04.096 05:30:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.096 05:30:07 -- scripts/common.sh@364 -- # decimal 1 00:15:04.096 05:30:07 -- scripts/common.sh@352 -- # local d=1 00:15:04.096 05:30:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.096 05:30:07 -- scripts/common.sh@354 -- # echo 1 00:15:04.096 05:30:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:04.096 05:30:07 -- scripts/common.sh@365 -- # decimal 2 00:15:04.096 05:30:07 -- scripts/common.sh@352 -- # local d=2 00:15:04.096 05:30:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.096 05:30:07 -- scripts/common.sh@354 -- # echo 2 00:15:04.096 05:30:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:04.096 05:30:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:04.096 05:30:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:04.096 05:30:07 -- scripts/common.sh@367 -- # return 0 00:15:04.096 05:30:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.096 05:30:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:04.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.097 --rc genhtml_branch_coverage=1 00:15:04.097 --rc genhtml_function_coverage=1 00:15:04.097 --rc genhtml_legend=1 00:15:04.097 --rc geninfo_all_blocks=1 00:15:04.097 --rc geninfo_unexecuted_blocks=1 00:15:04.097 00:15:04.097 ' 00:15:04.097 05:30:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.097 --rc genhtml_branch_coverage=1 00:15:04.097 --rc genhtml_function_coverage=1 00:15:04.097 --rc genhtml_legend=1 00:15:04.097 --rc geninfo_all_blocks=1 00:15:04.097 --rc geninfo_unexecuted_blocks=1 00:15:04.097 00:15:04.097 ' 00:15:04.097 05:30:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.097 --rc genhtml_branch_coverage=1 00:15:04.097 --rc genhtml_function_coverage=1 00:15:04.097 --rc genhtml_legend=1 00:15:04.097 --rc geninfo_all_blocks=1 00:15:04.097 --rc geninfo_unexecuted_blocks=1 00:15:04.097 00:15:04.097 
' 00:15:04.097 05:30:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.097 --rc genhtml_branch_coverage=1 00:15:04.097 --rc genhtml_function_coverage=1 00:15:04.097 --rc genhtml_legend=1 00:15:04.097 --rc geninfo_all_blocks=1 00:15:04.097 --rc geninfo_unexecuted_blocks=1 00:15:04.097 00:15:04.097 ' 00:15:04.097 05:30:07 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.097 05:30:07 -- nvmf/common.sh@7 -- # uname -s 00:15:04.097 05:30:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.097 05:30:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.097 05:30:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.097 05:30:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.097 05:30:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.097 05:30:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.097 05:30:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.097 05:30:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.097 05:30:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.097 05:30:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.097 05:30:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.097 05:30:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.097 05:30:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.097 05:30:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.097 05:30:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.097 05:30:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.097 05:30:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.097 05:30:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.097 05:30:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.097 05:30:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.097 05:30:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.097 05:30:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.097 05:30:07 -- paths/export.sh@5 -- # export PATH 00:15:04.097 05:30:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.097 05:30:07 -- nvmf/common.sh@46 -- # : 0 00:15:04.097 05:30:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.097 05:30:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.097 05:30:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.097 05:30:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.097 05:30:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.097 05:30:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.097 05:30:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.097 05:30:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.097 05:30:07 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:04.097 05:30:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.097 05:30:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.097 05:30:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.097 05:30:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.097 05:30:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.360 05:30:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.360 05:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.360 05:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.360 05:30:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.360 05:30:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.360 05:30:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.360 05:30:07 -- common/autotest_common.sh@10 -- # set +x 00:15:12.505 05:30:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:12.506 05:30:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:12.506 05:30:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:12.506 05:30:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:12.506 05:30:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:12.506 05:30:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:12.506 05:30:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:12.506 05:30:14 -- nvmf/common.sh@294 -- # net_devs=() 00:15:12.506 05:30:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:12.506 05:30:14 -- nvmf/common.sh@295 -- # e810=() 00:15:12.506 05:30:14 -- nvmf/common.sh@295 -- # local -ga e810 00:15:12.506 05:30:14 -- nvmf/common.sh@296 -- # x722=() 
00:15:12.506 05:30:14 -- nvmf/common.sh@296 -- # local -ga x722 00:15:12.506 05:30:14 -- nvmf/common.sh@297 -- # mlx=() 00:15:12.506 05:30:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:12.506 05:30:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.506 05:30:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:12.506 05:30:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:12.506 05:30:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.506 05:30:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:12.506 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:12.506 05:30:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.506 05:30:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:12.506 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:12.506 05:30:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.506 05:30:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.506 05:30:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.506 05:30:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:12.506 Found net devices under 0000:31:00.0: cvl_0_0 00:15:12.506 05:30:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:12.506 05:30:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.506 05:30:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.506 05:30:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.506 05:30:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:12.506 Found net devices under 0000:31:00.1: cvl_0_1 00:15:12.506 05:30:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.506 05:30:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:12.506 05:30:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:12.506 05:30:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.506 05:30:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.506 05:30:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.506 05:30:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:12.506 05:30:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.506 05:30:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.506 05:30:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:12.506 05:30:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.506 05:30:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.506 05:30:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:12.506 05:30:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:12.506 05:30:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.506 05:30:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.506 05:30:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.506 05:30:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.506 05:30:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:12.506 05:30:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.506 05:30:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.506 05:30:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.506 05:30:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:12.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:15:12.506 00:15:12.506 --- 10.0.0.2 ping statistics --- 00:15:12.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.506 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:15:12.506 05:30:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:15:12.506 00:15:12.506 --- 10.0.0.1 ping statistics --- 00:15:12.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.506 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:15:12.506 05:30:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.506 05:30:14 -- nvmf/common.sh@410 -- # return 0 00:15:12.506 05:30:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.506 05:30:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.506 05:30:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.506 05:30:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.506 05:30:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.506 05:30:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.506 05:30:14 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:12.506 05:30:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.506 05:30:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.506 05:30:14 -- common/autotest_common.sh@10 -- # set +x 00:15:12.506 05:30:14 -- nvmf/common.sh@469 -- # nvmfpid=1761357 00:15:12.506 05:30:14 -- nvmf/common.sh@470 -- # waitforlisten 1761357 00:15:12.506 05:30:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:12.506 05:30:14 -- common/autotest_common.sh@829 -- # '[' -z 1761357 ']' 00:15:12.506 05:30:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.506 05:30:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.506 05:30:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.506 05:30:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.506 05:30:14 -- common/autotest_common.sh@10 -- # set +x 00:15:12.506 [2024-12-07 05:30:14.994466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:12.506 [2024-12-07 05:30:14.994529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.506 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.506 [2024-12-07 05:30:15.072837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:12.506 [2024-12-07 05:30:15.135942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.506 [2024-12-07 05:30:15.136070] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.506 [2024-12-07 05:30:15.136080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.506 [2024-12-07 05:30:15.136088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
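Editor's note: the block above is the standard nvmftestinit/nvmfappstart phase for a phy (e810) run. One port of the NIC pair stays in the root namespace as the initiator side (cvl_0_1, 10.0.0.1), the other is moved into a private namespace and used by the target (cvl_0_0, 10.0.0.2), and the nvmf target application is then launched inside that namespace. A condensed, hand-written sketch of the same steps follows, with interface names, addresses and flags copied from the trace; the long Jenkins workspace path is abbreviated to $SPDK here, and the xtrace prefixes are dropped.

# target gets its own namespace and one physical port; the initiator keeps the other
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
modprobe nvme-tcp                                                    # kernel NVMe/TCP initiator
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &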
00:15:12.506 [2024-12-07 05:30:15.139027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.506 [2024-12-07 05:30:15.139047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.768 05:30:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.768 05:30:15 -- common/autotest_common.sh@862 -- # return 0 00:15:12.768 05:30:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.768 05:30:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 05:30:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 [2024-12-07 05:30:15.879299] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 [2024-12-07 05:30:15.903540] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 NULL1 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 Delay0 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.768 05:30:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.768 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 05:30:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@28 -- # perf_pid=1761678 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:12.768 05:30:15 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:12.768 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.768 [2024-12-07 05:30:16.000124] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:15.326 05:30:17 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.326 05:30:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.326 05:30:17 -- common/autotest_common.sh@10 -- # set +x 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 Read completed with error (sct=0, sc=8) 00:15:15.326 Write completed with error (sct=0, sc=8) 00:15:15.326 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 [2024-12-07 05:30:18.163733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1779930 is same with the state(5) to be set 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed 
with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read 
completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 starting I/O failed: -6 00:15:15.327 [2024-12-07 05:30:18.168921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd74c00c350 is same with the state(5) to be set 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed 
with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 Write completed with error (sct=0, sc=8) 00:15:15.327 Read completed with error (sct=0, sc=8) 00:15:15.327 [2024-12-07 05:30:18.169593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd74c000c00 is same with the state(5) to be set 00:15:16.268 [2024-12-07 05:30:19.137927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793b90 is same with the state(5) to be set 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 [2024-12-07 05:30:19.166973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1779ab0 is same with the state(5) to be set 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 [2024-12-07 05:30:19.167301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dde0 is same with the state(5) to be set 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write 
completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 [2024-12-07 05:30:19.170225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd74c00bf20 is same with the state(5) to be set 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Write completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 Read completed with error (sct=0, sc=8) 00:15:16.268 [2024-12-07 05:30:19.171996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd74c00c600 is same with the state(5) to be set 00:15:16.268 [2024-12-07 05:30:19.172487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1793b90 (9): Bad file descriptor 00:15:16.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:16.268 05:30:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.269 05:30:19 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:16.269 05:30:19 -- target/delete_subsystem.sh@35 -- # kill -0 1761678 00:15:16.269 05:30:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:16.269 Initializing NVMe Controllers 00:15:16.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.269 Controller IO queue size 128, less than required. 00:15:16.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:16.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:16.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:16.269 Initialization complete. Launching workers. 
00:15:16.269 ======================================================== 00:15:16.269 Latency(us) 00:15:16.269 Device Information : IOPS MiB/s Average min max 00:15:16.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.92 0.08 910636.20 214.04 1005691.82 00:15:16.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.94 0.08 928079.54 539.14 1010207.48 00:15:16.269 ======================================================== 00:15:16.269 Total : 317.85 0.16 919193.82 214.04 1010207.48 00:15:16.269 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@35 -- # kill -0 1761678 00:15:16.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1761678) - No such process 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@45 -- # NOT wait 1761678 00:15:16.530 05:30:19 -- common/autotest_common.sh@650 -- # local es=0 00:15:16.530 05:30:19 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1761678 00:15:16.530 05:30:19 -- common/autotest_common.sh@638 -- # local arg=wait 00:15:16.530 05:30:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.530 05:30:19 -- common/autotest_common.sh@642 -- # type -t wait 00:15:16.530 05:30:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.530 05:30:19 -- common/autotest_common.sh@653 -- # wait 1761678 00:15:16.530 05:30:19 -- common/autotest_common.sh@653 -- # es=1 00:15:16.530 05:30:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.530 05:30:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.530 05:30:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:16.530 05:30:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.530 05:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:16.530 05:30:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.530 05:30:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.530 05:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:16.530 [2024-12-07 05:30:19.703987] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.530 05:30:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.530 05:30:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.530 05:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:16.530 05:30:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@54 -- # perf_pid=1762399 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:16.530 05:30:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:16.530 EAL: No free 2048 kB hugepages 
reported on node 1 00:15:16.791 [2024-12-07 05:30:19.770657] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:17.051 05:30:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:17.051 05:30:20 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:17.051 05:30:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:17.624 05:30:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:17.624 05:30:20 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:17.624 05:30:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:18.197 05:30:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:18.197 05:30:21 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:18.197 05:30:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:18.767 05:30:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:18.767 05:30:21 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:18.767 05:30:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:19.028 05:30:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:19.028 05:30:22 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:19.028 05:30:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:19.601 05:30:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:19.601 05:30:22 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:19.601 05:30:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:19.863 Initializing NVMe Controllers 00:15:19.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:19.863 Controller IO queue size 128, less than required. 00:15:19.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:19.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:19.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:19.863 Initialization complete. Launching workers. 
00:15:19.863 ======================================================== 00:15:19.863 Latency(us) 00:15:19.863 Device Information : IOPS MiB/s Average min max 00:15:19.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002208.00 1000196.22 1041038.46 00:15:19.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003142.96 1000278.00 1010170.84 00:15:19.863 ======================================================== 00:15:19.863 Total : 256.00 0.12 1002675.48 1000196.22 1041038.46 00:15:19.863 00:15:20.124 05:30:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:20.124 05:30:23 -- target/delete_subsystem.sh@57 -- # kill -0 1762399 00:15:20.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1762399) - No such process 00:15:20.124 05:30:23 -- target/delete_subsystem.sh@67 -- # wait 1762399 00:15:20.124 05:30:23 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:20.124 05:30:23 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:20.124 05:30:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:20.124 05:30:23 -- nvmf/common.sh@116 -- # sync 00:15:20.124 05:30:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:20.124 05:30:23 -- nvmf/common.sh@119 -- # set +e 00:15:20.124 05:30:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:20.124 05:30:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:20.124 rmmod nvme_tcp 00:15:20.124 rmmod nvme_fabrics 00:15:20.124 rmmod nvme_keyring 00:15:20.124 05:30:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:20.124 05:30:23 -- nvmf/common.sh@123 -- # set -e 00:15:20.124 05:30:23 -- nvmf/common.sh@124 -- # return 0 00:15:20.124 05:30:23 -- nvmf/common.sh@477 -- # '[' -n 1761357 ']' 00:15:20.124 05:30:23 -- nvmf/common.sh@478 -- # killprocess 1761357 00:15:20.124 05:30:23 -- common/autotest_common.sh@936 -- # '[' -z 1761357 ']' 00:15:20.124 05:30:23 -- common/autotest_common.sh@940 -- # kill -0 1761357 00:15:20.124 05:30:23 -- common/autotest_common.sh@941 -- # uname 00:15:20.124 05:30:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.124 05:30:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1761357 00:15:20.384 05:30:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.384 05:30:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.384 05:30:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1761357' 00:15:20.384 killing process with pid 1761357 00:15:20.384 05:30:23 -- common/autotest_common.sh@955 -- # kill 1761357 00:15:20.384 05:30:23 -- common/autotest_common.sh@960 -- # wait 1761357 00:15:20.384 05:30:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.384 05:30:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.384 05:30:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.384 05:30:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.385 05:30:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.385 05:30:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.385 05:30:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.385 05:30:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.928 05:30:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:22.928 00:15:22.928 real 0m18.483s 00:15:22.928 user 0m30.786s 00:15:22.928 sys 0m6.825s 
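Editor's note: before the next test begins, a condensed view of what nvmf_delete_subsystem just exercised. The rpc_cmd calls in the trace are the test suite's wrapper that forwards its arguments to scripts/rpc.py, so the sequence can be read as the direct invocations sketched below (arguments copied from the trace; NULL1 is a null bdev whose 1000/512 arguments are its size in MB and block size, and Delay0 wraps it with a large artificial latency so that the first spdk_nvme_perf run still has commands in flight when the subsystem is deleted, which is what the 'completed with error (sct=0, sc=8)' completions above show).

# condensed from target/delete_subsystem.sh as it appears in the trace above
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# start the initiator-side load, then delete the subsystem underneath it:
#   spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# the test then recreates the subsystem, re-adds the listener and Delay0 namespace,
# and lets a second perf run complete against it before tearing everything down.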
00:15:22.928 05:30:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:22.928 05:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:22.928 ************************************ 00:15:22.928 END TEST nvmf_delete_subsystem 00:15:22.928 ************************************ 00:15:22.928 05:30:25 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:22.928 05:30:25 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:22.928 05:30:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.928 05:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:22.928 ************************************ 00:15:22.928 START TEST nvmf_nvme_cli 00:15:22.928 ************************************ 00:15:22.928 05:30:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:22.928 * Looking for test storage... 00:15:22.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.928 05:30:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:22.928 05:30:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:22.928 05:30:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:22.928 05:30:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:22.928 05:30:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:22.928 05:30:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:22.928 05:30:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:22.928 05:30:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:22.928 05:30:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.928 05:30:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:22.928 05:30:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:22.928 05:30:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:22.928 05:30:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:22.928 05:30:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:22.928 05:30:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:22.928 05:30:25 -- scripts/common.sh@344 -- # : 1 00:15:22.928 05:30:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:22.928 05:30:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:22.928 05:30:25 -- scripts/common.sh@364 -- # decimal 1 00:15:22.928 05:30:25 -- scripts/common.sh@352 -- # local d=1 00:15:22.928 05:30:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.928 05:30:25 -- scripts/common.sh@354 -- # echo 1 00:15:22.928 05:30:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:22.928 05:30:25 -- scripts/common.sh@365 -- # decimal 2 00:15:22.928 05:30:25 -- scripts/common.sh@352 -- # local d=2 00:15:22.928 05:30:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.928 05:30:25 -- scripts/common.sh@354 -- # echo 2 00:15:22.928 05:30:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:22.928 05:30:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:22.928 05:30:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:22.928 05:30:25 -- scripts/common.sh@367 -- # return 0 00:15:22.928 05:30:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.928 --rc genhtml_branch_coverage=1 00:15:22.928 --rc genhtml_function_coverage=1 00:15:22.928 --rc genhtml_legend=1 00:15:22.928 --rc geninfo_all_blocks=1 00:15:22.928 --rc geninfo_unexecuted_blocks=1 00:15:22.928 00:15:22.928 ' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.928 --rc genhtml_branch_coverage=1 00:15:22.928 --rc genhtml_function_coverage=1 00:15:22.928 --rc genhtml_legend=1 00:15:22.928 --rc geninfo_all_blocks=1 00:15:22.928 --rc geninfo_unexecuted_blocks=1 00:15:22.928 00:15:22.928 ' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.928 --rc genhtml_branch_coverage=1 00:15:22.928 --rc genhtml_function_coverage=1 00:15:22.928 --rc genhtml_legend=1 00:15:22.928 --rc geninfo_all_blocks=1 00:15:22.928 --rc geninfo_unexecuted_blocks=1 00:15:22.928 00:15:22.928 ' 00:15:22.928 05:30:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.928 --rc genhtml_branch_coverage=1 00:15:22.928 --rc genhtml_function_coverage=1 00:15:22.928 --rc genhtml_legend=1 00:15:22.928 --rc geninfo_all_blocks=1 00:15:22.928 --rc geninfo_unexecuted_blocks=1 00:15:22.928 00:15:22.928 ' 00:15:22.929 05:30:25 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.929 05:30:25 -- nvmf/common.sh@7 -- # uname -s 00:15:22.929 05:30:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.929 05:30:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.929 05:30:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.929 05:30:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.929 05:30:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.929 05:30:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.929 05:30:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.929 05:30:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.929 05:30:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.929 05:30:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.929 05:30:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.929 05:30:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.929 05:30:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.929 05:30:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.929 05:30:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.929 05:30:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.929 05:30:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.929 05:30:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.929 05:30:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.929 05:30:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.929 05:30:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.929 05:30:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.929 05:30:25 -- paths/export.sh@5 -- # export PATH 00:15:22.929 05:30:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.929 05:30:25 -- nvmf/common.sh@46 -- # : 0 00:15:22.929 05:30:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:22.929 05:30:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:22.929 05:30:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:22.929 05:30:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.929 05:30:25 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.929 05:30:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:22.929 05:30:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:22.929 05:30:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:22.929 05:30:25 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.929 05:30:25 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.929 05:30:25 -- target/nvme_cli.sh@14 -- # devs=() 00:15:22.929 05:30:25 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:22.929 05:30:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:22.929 05:30:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.929 05:30:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:22.929 05:30:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:22.929 05:30:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:22.929 05:30:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.929 05:30:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.929 05:30:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.929 05:30:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:22.929 05:30:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:22.929 05:30:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:22.929 05:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 05:30:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:31.071 05:30:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:31.071 05:30:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:31.071 05:30:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:31.071 05:30:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:31.071 05:30:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:31.071 05:30:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:31.071 05:30:32 -- nvmf/common.sh@294 -- # net_devs=() 00:15:31.071 05:30:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:31.071 05:30:32 -- nvmf/common.sh@295 -- # e810=() 00:15:31.071 05:30:32 -- nvmf/common.sh@295 -- # local -ga e810 00:15:31.071 05:30:32 -- nvmf/common.sh@296 -- # x722=() 00:15:31.071 05:30:32 -- nvmf/common.sh@296 -- # local -ga x722 00:15:31.071 05:30:32 -- nvmf/common.sh@297 -- # mlx=() 00:15:31.071 05:30:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:31.071 05:30:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.071 05:30:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:31.071 05:30:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:31.071 05:30:32 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:31.071 05:30:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.071 05:30:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:31.071 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:31.071 05:30:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.071 05:30:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:31.071 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:31.071 05:30:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.071 05:30:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.071 05:30:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.071 05:30:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:31.071 Found net devices under 0000:31:00.0: cvl_0_0 00:15:31.071 05:30:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.071 05:30:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.071 05:30:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.071 05:30:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.071 05:30:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:31.071 Found net devices under 0000:31:00.1: cvl_0_1 00:15:31.071 05:30:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.071 05:30:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:31.071 05:30:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:31.071 05:30:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:31.071 05:30:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.071 05:30:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.071 05:30:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.071 05:30:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:31.071 05:30:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.071 05:30:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.071 05:30:32 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:31.071 05:30:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.071 05:30:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.071 05:30:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:31.071 05:30:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:31.071 05:30:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.071 05:30:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.071 05:30:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.071 05:30:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.071 05:30:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:31.071 05:30:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.071 05:30:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.071 05:30:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.071 05:30:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:31.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:15:31.071 00:15:31.071 --- 10.0.0.2 ping statistics --- 00:15:31.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.071 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:15:31.071 05:30:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:15:31.071 00:15:31.071 --- 10.0.0.1 ping statistics --- 00:15:31.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.071 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:15:31.071 05:30:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.071 05:30:33 -- nvmf/common.sh@410 -- # return 0 00:15:31.071 05:30:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.071 05:30:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.071 05:30:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.071 05:30:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.071 05:30:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.071 05:30:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.071 05:30:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.071 05:30:33 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:31.071 05:30:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.071 05:30:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.071 05:30:33 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 05:30:33 -- nvmf/common.sh@469 -- # nvmfpid=1767334 00:15:31.071 05:30:33 -- nvmf/common.sh@470 -- # waitforlisten 1767334 00:15:31.071 05:30:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.071 05:30:33 -- common/autotest_common.sh@829 -- # '[' -z 1767334 ']' 00:15:31.071 05:30:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.071 05:30:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.071 05:30:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.071 05:30:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.071 05:30:33 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 [2024-12-07 05:30:33.210943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:31.071 [2024-12-07 05:30:33.211022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.071 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.071 [2024-12-07 05:30:33.286939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.072 [2024-12-07 05:30:33.362619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.072 [2024-12-07 05:30:33.362753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.072 [2024-12-07 05:30:33.362764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.072 [2024-12-07 05:30:33.362772] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.072 [2024-12-07 05:30:33.362912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.072 [2024-12-07 05:30:33.363041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.072 [2024-12-07 05:30:33.363134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.072 [2024-12-07 05:30:33.363135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.072 05:30:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.072 05:30:34 -- common/autotest_common.sh@862 -- # return 0 00:15:31.072 05:30:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:31.072 05:30:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 05:30:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.072 05:30:34 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 [2024-12-07 05:30:34.057315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 Malloc0 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 Malloc1 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 
-i 291 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 [2024-12-07 05:30:34.147418] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:31.072 05:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.072 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.072 05:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.072 05:30:34 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:31.333 00:15:31.333 Discovery Log Number of Records 2, Generation counter 2 00:15:31.333 =====Discovery Log Entry 0====== 00:15:31.333 trtype: tcp 00:15:31.333 adrfam: ipv4 00:15:31.333 subtype: current discovery subsystem 00:15:31.333 treq: not required 00:15:31.333 portid: 0 00:15:31.333 trsvcid: 4420 00:15:31.333 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:31.333 traddr: 10.0.0.2 00:15:31.333 eflags: explicit discovery connections, duplicate discovery information 00:15:31.333 sectype: none 00:15:31.333 =====Discovery Log Entry 1====== 00:15:31.333 trtype: tcp 00:15:31.333 adrfam: ipv4 00:15:31.333 subtype: nvme subsystem 00:15:31.333 treq: not required 00:15:31.333 portid: 0 00:15:31.333 trsvcid: 4420 00:15:31.333 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:31.333 traddr: 10.0.0.2 00:15:31.333 eflags: none 00:15:31.333 sectype: none 00:15:31.333 05:30:34 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:31.333 05:30:34 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:31.333 05:30:34 -- nvmf/common.sh@510 -- # local dev _ 00:15:31.333 05:30:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:31.333 05:30:34 -- nvmf/common.sh@509 -- # nvme list 00:15:31.333 05:30:34 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:31.333 05:30:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:31.333 05:30:34 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.333 05:30:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:31.333 05:30:34 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:31.333 05:30:34 -- target/nvme_cli.sh@32 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.718 05:30:35 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:32.718 05:30:35 -- common/autotest_common.sh@1187 -- # local i=0 00:15:32.718 05:30:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.718 05:30:35 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:15:32.718 05:30:35 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:15:32.718 05:30:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:35.351 05:30:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:35.351 05:30:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:35.351 05:30:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.351 05:30:37 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:15:35.351 05:30:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.351 05:30:37 -- common/autotest_common.sh@1197 -- # return 0 00:15:35.351 05:30:37 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:35.351 05:30:37 -- nvmf/common.sh@510 -- # local dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@509 -- # nvme list 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:35.351 /dev/nvme0n2 ]] 00:15:35.351 05:30:37 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:35.351 05:30:37 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:35.351 05:30:37 -- nvmf/common.sh@510 -- # local dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@509 -- # nvme list 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.351 05:30:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:35.351 05:30:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.351 05:30:37 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:35.351 05:30:37 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.351 05:30:38 -- 
target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.351 05:30:38 -- common/autotest_common.sh@1208 -- # local i=0 00:15:35.351 05:30:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:35.351 05:30:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.351 05:30:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:35.351 05:30:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.351 05:30:38 -- common/autotest_common.sh@1220 -- # return 0 00:15:35.351 05:30:38 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:35.351 05:30:38 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.352 05:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.352 05:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.352 05:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.352 05:30:38 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:35.352 05:30:38 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:35.352 05:30:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:35.352 05:30:38 -- nvmf/common.sh@116 -- # sync 00:15:35.352 05:30:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:35.352 05:30:38 -- nvmf/common.sh@119 -- # set +e 00:15:35.352 05:30:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:35.352 05:30:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:35.352 rmmod nvme_tcp 00:15:35.352 rmmod nvme_fabrics 00:15:35.352 rmmod nvme_keyring 00:15:35.352 05:30:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:35.352 05:30:38 -- nvmf/common.sh@123 -- # set -e 00:15:35.352 05:30:38 -- nvmf/common.sh@124 -- # return 0 00:15:35.352 05:30:38 -- nvmf/common.sh@477 -- # '[' -n 1767334 ']' 00:15:35.352 05:30:38 -- nvmf/common.sh@478 -- # killprocess 1767334 00:15:35.352 05:30:38 -- common/autotest_common.sh@936 -- # '[' -z 1767334 ']' 00:15:35.352 05:30:38 -- common/autotest_common.sh@940 -- # kill -0 1767334 00:15:35.352 05:30:38 -- common/autotest_common.sh@941 -- # uname 00:15:35.352 05:30:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.352 05:30:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1767334 00:15:35.352 05:30:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.352 05:30:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.352 05:30:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1767334' 00:15:35.352 killing process with pid 1767334 00:15:35.352 05:30:38 -- common/autotest_common.sh@955 -- # kill 1767334 00:15:35.352 05:30:38 -- common/autotest_common.sh@960 -- # wait 1767334 00:15:35.352 05:30:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:35.352 05:30:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:35.352 05:30:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:35.352 05:30:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.352 05:30:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:35.352 05:30:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.352 05:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.352 05:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.263 05:30:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:37.263 00:15:37.263 real 0m14.865s 00:15:37.263 user 0m22.366s 
00:15:37.263 sys 0m6.076s 00:15:37.263 05:30:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.263 05:30:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.263 ************************************ 00:15:37.263 END TEST nvmf_nvme_cli 00:15:37.263 ************************************ 00:15:37.525 05:30:40 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:37.525 05:30:40 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:37.525 05:30:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.525 05:30:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 ************************************ 00:15:37.525 START TEST nvmf_host_management 00:15:37.525 ************************************ 00:15:37.525 05:30:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:37.525 * Looking for test storage... 00:15:37.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.525 05:30:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.525 05:30:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.525 05:30:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.525 05:30:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.525 05:30:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.525 05:30:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.525 05:30:40 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.525 05:30:40 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.525 05:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.525 05:30:40 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.525 05:30:40 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.525 05:30:40 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.525 05:30:40 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.525 05:30:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.525 05:30:40 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.525 05:30:40 -- scripts/common.sh@344 -- # : 1 00:15:37.525 05:30:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.525 05:30:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.525 05:30:40 -- scripts/common.sh@364 -- # decimal 1 00:15:37.525 05:30:40 -- scripts/common.sh@352 -- # local d=1 00:15:37.525 05:30:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.525 05:30:40 -- scripts/common.sh@354 -- # echo 1 00:15:37.525 05:30:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.525 05:30:40 -- scripts/common.sh@365 -- # decimal 2 00:15:37.525 05:30:40 -- scripts/common.sh@352 -- # local d=2 00:15:37.525 05:30:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.525 05:30:40 -- scripts/common.sh@354 -- # echo 2 00:15:37.525 05:30:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.525 05:30:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.525 05:30:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.525 05:30:40 -- scripts/common.sh@367 -- # return 0 00:15:37.525 05:30:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.525 --rc genhtml_branch_coverage=1 00:15:37.525 --rc genhtml_function_coverage=1 00:15:37.525 --rc genhtml_legend=1 00:15:37.525 --rc geninfo_all_blocks=1 00:15:37.525 --rc geninfo_unexecuted_blocks=1 00:15:37.525 00:15:37.525 ' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.525 --rc genhtml_branch_coverage=1 00:15:37.525 --rc genhtml_function_coverage=1 00:15:37.525 --rc genhtml_legend=1 00:15:37.525 --rc geninfo_all_blocks=1 00:15:37.525 --rc geninfo_unexecuted_blocks=1 00:15:37.525 00:15:37.525 ' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:37.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.525 --rc genhtml_branch_coverage=1 00:15:37.525 --rc genhtml_function_coverage=1 00:15:37.525 --rc genhtml_legend=1 00:15:37.525 --rc geninfo_all_blocks=1 00:15:37.525 --rc geninfo_unexecuted_blocks=1 00:15:37.525 00:15:37.525 ' 00:15:37.525 05:30:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.525 --rc genhtml_branch_coverage=1 00:15:37.525 --rc genhtml_function_coverage=1 00:15:37.525 --rc genhtml_legend=1 00:15:37.525 --rc geninfo_all_blocks=1 00:15:37.525 --rc geninfo_unexecuted_blocks=1 00:15:37.525 00:15:37.525 ' 00:15:37.525 05:30:40 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.525 05:30:40 -- nvmf/common.sh@7 -- # uname -s 00:15:37.525 05:30:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.525 05:30:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.525 05:30:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.525 05:30:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.525 05:30:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.525 05:30:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.525 05:30:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.525 05:30:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.525 05:30:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.525 05:30:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.525 05:30:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:37.525 05:30:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:37.525 05:30:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.525 05:30:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.525 05:30:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.525 05:30:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.525 05:30:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.525 05:30:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.525 05:30:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.525 05:30:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.525 05:30:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.525 05:30:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.525 05:30:40 -- paths/export.sh@5 -- # export PATH 00:15:37.525 05:30:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.525 05:30:40 -- nvmf/common.sh@46 -- # : 0 00:15:37.525 05:30:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.525 05:30:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.525 05:30:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.525 05:30:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.525 05:30:40 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.525 05:30:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.525 05:30:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.525 05:30:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.525 05:30:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.525 05:30:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.525 05:30:40 -- target/host_management.sh@104 -- # nvmftestinit 00:15:37.525 05:30:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.525 05:30:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.525 05:30:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.525 05:30:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.525 05:30:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.525 05:30:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.525 05:30:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.525 05:30:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.525 05:30:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:37.525 05:30:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:37.525 05:30:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:37.525 05:30:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.673 05:30:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:45.673 05:30:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:45.673 05:30:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:45.673 05:30:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:45.673 05:30:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:45.673 05:30:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:45.673 05:30:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:45.673 05:30:47 -- nvmf/common.sh@294 -- # net_devs=() 00:15:45.673 05:30:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:45.673 05:30:47 -- nvmf/common.sh@295 -- # e810=() 00:15:45.673 05:30:47 -- nvmf/common.sh@295 -- # local -ga e810 00:15:45.673 05:30:47 -- nvmf/common.sh@296 -- # x722=() 00:15:45.673 05:30:47 -- nvmf/common.sh@296 -- # local -ga x722 00:15:45.673 05:30:47 -- nvmf/common.sh@297 -- # mlx=() 00:15:45.673 05:30:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:45.673 05:30:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.673 05:30:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:45.673 05:30:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@326 -- # [[ e810 == 
mlx5 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:45.673 05:30:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.673 05:30:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:45.673 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:45.673 05:30:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.673 05:30:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:45.673 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:45.673 05:30:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.673 05:30:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.673 05:30:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.673 05:30:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:45.673 Found net devices under 0000:31:00.0: cvl_0_0 00:15:45.673 05:30:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.673 05:30:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.673 05:30:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.673 05:30:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.673 05:30:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:45.673 Found net devices under 0000:31:00.1: cvl_0_1 00:15:45.673 05:30:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.673 05:30:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:45.673 05:30:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:45.673 05:30:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:45.673 05:30:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.673 05:30:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.673 05:30:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.673 05:30:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:45.673 05:30:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.673 05:30:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.673 05:30:47 -- nvmf/common.sh@239 -- # 
NVMF_SECOND_TARGET_IP= 00:15:45.673 05:30:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.673 05:30:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.673 05:30:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:45.673 05:30:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:45.673 05:30:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.673 05:30:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.673 05:30:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.673 05:30:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.673 05:30:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:45.673 05:30:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.673 05:30:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.673 05:30:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.673 05:30:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:45.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:15:45.673 00:15:45.673 --- 10.0.0.2 ping statistics --- 00:15:45.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.673 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:15:45.673 05:30:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:15:45.673 00:15:45.673 --- 10.0.0.1 ping statistics --- 00:15:45.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.673 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:15:45.673 05:30:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.673 05:30:48 -- nvmf/common.sh@410 -- # return 0 00:15:45.673 05:30:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:45.673 05:30:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.673 05:30:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:45.674 05:30:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:45.674 05:30:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.674 05:30:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:45.674 05:30:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:45.674 05:30:48 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:45.674 05:30:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:45.674 05:30:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.674 05:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.674 ************************************ 00:15:45.674 START TEST nvmf_host_management 00:15:45.674 ************************************ 00:15:45.674 05:30:48 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:15:45.674 05:30:48 -- target/host_management.sh@69 -- # starttarget 00:15:45.674 05:30:48 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:45.674 05:30:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:45.674 05:30:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.674 05:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.674 05:30:48 -- 
nvmf/common.sh@469 -- # nvmfpid=1772650 00:15:45.674 05:30:48 -- nvmf/common.sh@470 -- # waitforlisten 1772650 00:15:45.674 05:30:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:45.674 05:30:48 -- common/autotest_common.sh@829 -- # '[' -z 1772650 ']' 00:15:45.674 05:30:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.674 05:30:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.674 05:30:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.674 05:30:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.674 05:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.674 [2024-12-07 05:30:48.222527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:45.674 [2024-12-07 05:30:48.222621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.674 [2024-12-07 05:30:48.314317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.674 [2024-12-07 05:30:48.406138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.674 [2024-12-07 05:30:48.406295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.674 [2024-12-07 05:30:48.406305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.674 [2024-12-07 05:30:48.406313] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
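At this point nvmfappstart has launched the target inside the test namespace and waitforlisten is polling for its RPC socket. A minimal sketch of that step, assuming the job's checkout path and namespace name from this log and using framework_wait_init as the readiness probe (the helper's exact polling mechanism may differ):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# -i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups,
# -m 0x1E pins reactors to cores 1-4, matching the reactor messages above.
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Block until the target answers RPCs on the default socket.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init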
00:15:45.674 [2024-12-07 05:30:48.406459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.674 [2024-12-07 05:30:48.406626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.674 [2024-12-07 05:30:48.406790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.674 [2024-12-07 05:30:48.406791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:45.934 05:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.934 05:30:49 -- common/autotest_common.sh@862 -- # return 0 00:15:45.934 05:30:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:45.934 05:30:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.934 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 05:30:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.934 05:30:49 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:45.934 05:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.934 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 [2024-12-07 05:30:49.056057] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.934 05:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.934 05:30:49 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:45.934 05:30:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.934 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 05:30:49 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:45.934 05:30:49 -- target/host_management.sh@23 -- # cat 00:15:45.934 05:30:49 -- target/host_management.sh@30 -- # rpc_cmd 00:15:45.934 05:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.934 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 Malloc0 00:15:45.934 [2024-12-07 05:30:49.119605] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.934 05:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.934 05:30:49 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:45.934 05:30:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.934 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 05:30:49 -- target/host_management.sh@73 -- # perfpid=1772926 00:15:46.194 05:30:49 -- target/host_management.sh@74 -- # waitforlisten 1772926 /var/tmp/bdevperf.sock 00:15:46.194 05:30:49 -- common/autotest_common.sh@829 -- # '[' -z 1772926 ']' 00:15:46.194 05:30:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.194 05:30:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.194 05:30:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
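The rpcs.txt batch applied above stands up the storage stack that bdevperf will exercise: a TCP transport, a malloc bdev, and a subsystem listening on 10.0.0.2:4420. An equivalent per-command sequence, sketched here with scripts/rpc.py (the serial number is a placeholder; the exact contents of rpcs.txt may differ in detail):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# TCP transport with the same options the test passes above (-o, -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB backing bdev with 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode0 with Malloc0 as a namespace, listening on 10.0.0.2:4420
# (serial number below is a placeholder, not taken from the log)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420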
00:15:46.194 05:30:49 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:46.194 05:30:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.194 05:30:49 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:46.194 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:46.194 05:30:49 -- nvmf/common.sh@520 -- # config=() 00:15:46.194 05:30:49 -- nvmf/common.sh@520 -- # local subsystem config 00:15:46.194 05:30:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:46.194 05:30:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:46.194 { 00:15:46.194 "params": { 00:15:46.194 "name": "Nvme$subsystem", 00:15:46.194 "trtype": "$TEST_TRANSPORT", 00:15:46.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.194 "adrfam": "ipv4", 00:15:46.194 "trsvcid": "$NVMF_PORT", 00:15:46.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.194 "hdgst": ${hdgst:-false}, 00:15:46.194 "ddgst": ${ddgst:-false} 00:15:46.194 }, 00:15:46.194 "method": "bdev_nvme_attach_controller" 00:15:46.194 } 00:15:46.194 EOF 00:15:46.194 )") 00:15:46.194 05:30:49 -- nvmf/common.sh@542 -- # cat 00:15:46.194 05:30:49 -- nvmf/common.sh@544 -- # jq . 00:15:46.194 05:30:49 -- nvmf/common.sh@545 -- # IFS=, 00:15:46.194 05:30:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:46.194 "params": { 00:15:46.194 "name": "Nvme0", 00:15:46.194 "trtype": "tcp", 00:15:46.194 "traddr": "10.0.0.2", 00:15:46.194 "adrfam": "ipv4", 00:15:46.194 "trsvcid": "4420", 00:15:46.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:46.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:46.194 "hdgst": false, 00:15:46.194 "ddgst": false 00:15:46.194 }, 00:15:46.194 "method": "bdev_nvme_attach_controller" 00:15:46.194 }' 00:15:46.194 [2024-12-07 05:30:49.217020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:46.194 [2024-12-07 05:30:49.217072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772926 ] 00:15:46.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.194 [2024-12-07 05:30:49.278275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.194 [2024-12-07 05:30:49.342135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.453 Running I/O for 10 seconds... 
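The heredoc above shows only the per-controller "params" object that gen_nvmf_target_json emits. Wrapped in the standard SPDK JSON-config layout, the file handed to bdevperf on /dev/fd/63 looks roughly like the sketch below; the outer subsystems/config structure is an assumption about the helper's output, and the temp-file path stands in for the process substitution the test actually uses.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep queue of 64 KiB verify I/O for 10 seconds, as invoked above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10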
00:15:47.026 05:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.026 05:30:49 -- common/autotest_common.sh@862 -- # return 0 00:15:47.026 05:30:49 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:47.026 05:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.026 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:47.026 05:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.026 05:30:50 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.026 05:30:50 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:47.026 05:30:50 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:47.026 05:30:50 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:47.026 05:30:50 -- target/host_management.sh@52 -- # local ret=1 00:15:47.026 05:30:50 -- target/host_management.sh@53 -- # local i 00:15:47.026 05:30:50 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:47.026 05:30:50 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:47.026 05:30:50 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:47.026 05:30:50 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:47.026 05:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.026 05:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.026 05:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.026 05:30:50 -- target/host_management.sh@55 -- # read_io_count=1691 00:15:47.026 05:30:50 -- target/host_management.sh@58 -- # '[' 1691 -ge 100 ']' 00:15:47.026 05:30:50 -- target/host_management.sh@59 -- # ret=0 00:15:47.026 05:30:50 -- target/host_management.sh@60 -- # break 00:15:47.026 05:30:50 -- target/host_management.sh@64 -- # return 0 00:15:47.026 05:30:50 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:47.026 05:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.026 05:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.026 [2024-12-07 05:30:50.063083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.026 [2024-12-07 05:30:50.063134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.026 [2024-12-07 05:30:50.063144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.026 [2024-12-07 05:30:50.063151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.026 [2024-12-07 05:30:50.063158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to 
be set 00:15:47.027 [2024-12-07 05:30:50.063187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cb50 is same with the state(5) to be set 00:15:47.027 [2024-12-07 05:30:50.063531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104704 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.027 [2024-12-07 05:30:50.063927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.027 [2024-12-07 05:30:50.063937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.063944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.063953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.063961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.063970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.063977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.063987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.063994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:47.028 [2024-12-07 05:30:50.064162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
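The run of NOTICE entries above and below is nvme_qpair.c draining I/O queue 1 while the controller is reset: every outstanding READ/WRITE command is completed with ABORTED - SQ DELETION, i.e. the "(00/08)" in each completion, status code type 0 (generic) with status code 0x08 (command aborted due to SQ deletion). A quick way to tally such a burst from a saved console log (the console.log file name here is only an assumption for illustration):

  # total aborted completions in the saved log
  grep -o 'ABORTED - SQ DELETION' console.log | wc -l
  # aborted commands split by opcode (READ vs WRITE) on qid 1
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | awk '{print $2}' | sort | uniq -c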
00:15:47.028 [2024-12-07 05:30:50.064333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 
[2024-12-07 05:30:50.064504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.028 [2024-12-07 05:30:50.064617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.028 [2024-12-07 05:30:50.064625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.029 [2024-12-07 05:30:50.064634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.029 [2024-12-07 05:30:50.064641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.029 [2024-12-07 05:30:50.064651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.029 [2024-12-07 05:30:50.064660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.029 [2024-12-07 05:30:50.064669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.029 [2024-12-07 
05:30:50.064676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.029 [2024-12-07 05:30:50.064686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcff80 is same with the state(5) to be set 00:15:47.029 [2024-12-07 05:30:50.064727] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdcff80 was disconnected and freed. reset controller. 00:15:47.029 [2024-12-07 05:30:50.065921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:47.029 task offset: 104192 on job bdev=Nvme0n1 fails 00:15:47.029 00:15:47.029 Latency(us) 00:15:47.029 [2024-12-07T04:30:50.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.029 [2024-12-07T04:30:50.269Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:47.029 [2024-12-07T04:30:50.269Z] Job: Nvme0n1 ended in about 0.54 seconds with error 00:15:47.029 Verification LBA range: start 0x0 length 0x400 00:15:47.029 Nvme0n1 : 0.54 3354.74 209.67 118.42 0.00 18122.16 1829.55 22063.79 00:15:47.029 [2024-12-07T04:30:50.269Z] =================================================================================================================== 00:15:47.029 [2024-12-07T04:30:50.269Z] Total : 3354.74 209.67 118.42 0.00 18122.16 1829.55 22063.79 00:15:47.029 05:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.029 [2024-12-07 05:30:50.067909] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:47.029 [2024-12-07 05:30:50.067932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd28f0 (9): Bad file descriptor 00:15:47.029 05:30:50 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:47.029 05:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.029 05:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.029 [2024-12-07 05:30:50.072354] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:47.029 [2024-12-07 05:30:50.072437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:47.029 [2024-12-07 05:30:50.072458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.029 [2024-12-07 05:30:50.072471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:47.029 [2024-12-07 05:30:50.072479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:47.029 [2024-12-07 05:30:50.072486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:15:47.029 [2024-12-07 05:30:50.072494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd28f0 00:15:47.029 [2024-12-07 05:30:50.072512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd28f0 (9): Bad file descriptor 00:15:47.029 [2024-12-07 05:30:50.072524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:47.029 
[2024-12-07 05:30:50.072531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:47.029 [2024-12-07 05:30:50.072540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:47.029 [2024-12-07 05:30:50.072553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:47.029 05:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.029 05:30:50 -- target/host_management.sh@87 -- # sleep 1 00:15:47.970 05:30:51 -- target/host_management.sh@91 -- # kill -9 1772926 00:15:47.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1772926) - No such process 00:15:47.970 05:30:51 -- target/host_management.sh@91 -- # true 00:15:47.970 05:30:51 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:47.970 05:30:51 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:47.970 05:30:51 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:47.970 05:30:51 -- nvmf/common.sh@520 -- # config=() 00:15:47.970 05:30:51 -- nvmf/common.sh@520 -- # local subsystem config 00:15:47.970 05:30:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:47.970 05:30:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:47.970 { 00:15:47.970 "params": { 00:15:47.970 "name": "Nvme$subsystem", 00:15:47.970 "trtype": "$TEST_TRANSPORT", 00:15:47.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.970 "adrfam": "ipv4", 00:15:47.970 "trsvcid": "$NVMF_PORT", 00:15:47.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.970 "hdgst": ${hdgst:-false}, 00:15:47.970 "ddgst": ${ddgst:-false} 00:15:47.970 }, 00:15:47.970 "method": "bdev_nvme_attach_controller" 00:15:47.970 } 00:15:47.970 EOF 00:15:47.970 )") 00:15:47.970 05:30:51 -- nvmf/common.sh@542 -- # cat 00:15:47.970 05:30:51 -- nvmf/common.sh@544 -- # jq . 00:15:47.970 05:30:51 -- nvmf/common.sh@545 -- # IFS=, 00:15:47.970 05:30:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:47.970 "params": { 00:15:47.970 "name": "Nvme0", 00:15:47.970 "trtype": "tcp", 00:15:47.970 "traddr": "10.0.0.2", 00:15:47.970 "adrfam": "ipv4", 00:15:47.970 "trsvcid": "4420", 00:15:47.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:47.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:47.970 "hdgst": false, 00:15:47.970 "ddgst": false 00:15:47.970 }, 00:15:47.970 "method": "bdev_nvme_attach_controller" 00:15:47.970 }' 00:15:47.970 [2024-12-07 05:30:51.131858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:47.970 [2024-12-07 05:30:51.131911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773375 ] 00:15:47.970 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.970 [2024-12-07 05:30:51.192557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.230 [2024-12-07 05:30:51.254825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.491 Running I/O for 1 seconds... 
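The retry above succeeds because host_management.sh first authorizes the host NQN on the subsystem (the earlier FABRIC CONNECT attempt was rejected with "does not allow host", sct 1 / sc 132) and then re-runs bdevperf with the fragment printed by gen_nvmf_target_json. A standalone sketch of the same two steps, where scripts/rpc.py stands in for the test's rpc_cmd wrapper; the bdevperf.json file name and the surrounding "subsystems" wrapper follow the usual SPDK JSON config layout and are assumptions here, since the test pipes the fragment through /dev/fd/62 instead:

  # allow the initiator NQN on the subsystem, as host_management.sh does above
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # bdevperf config equivalent to the gen_nvmf_target_json output above
  cat > bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

  # 64-deep, 64 KiB verify workload for 1 second, matching the run above
  build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1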
00:15:49.429 00:15:49.429 Latency(us) 00:15:49.429 [2024-12-07T04:30:52.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.429 [2024-12-07T04:30:52.669Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:49.429 Verification LBA range: start 0x0 length 0x400 00:15:49.429 Nvme0n1 : 1.01 3407.65 212.98 0.00 0.00 18478.27 2334.72 22173.01 00:15:49.429 [2024-12-07T04:30:52.669Z] =================================================================================================================== 00:15:49.429 [2024-12-07T04:30:52.669Z] Total : 3407.65 212.98 0.00 0.00 18478.27 2334.72 22173.01 00:15:49.429 05:30:52 -- target/host_management.sh@101 -- # stoptarget 00:15:49.429 05:30:52 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:49.689 05:30:52 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:49.689 05:30:52 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:49.689 05:30:52 -- target/host_management.sh@40 -- # nvmftestfini 00:15:49.689 05:30:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:49.689 05:30:52 -- nvmf/common.sh@116 -- # sync 00:15:49.689 05:30:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:49.689 05:30:52 -- nvmf/common.sh@119 -- # set +e 00:15:49.689 05:30:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:49.689 05:30:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:49.689 rmmod nvme_tcp 00:15:49.689 rmmod nvme_fabrics 00:15:49.689 rmmod nvme_keyring 00:15:49.689 05:30:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:49.689 05:30:52 -- nvmf/common.sh@123 -- # set -e 00:15:49.689 05:30:52 -- nvmf/common.sh@124 -- # return 0 00:15:49.689 05:30:52 -- nvmf/common.sh@477 -- # '[' -n 1772650 ']' 00:15:49.689 05:30:52 -- nvmf/common.sh@478 -- # killprocess 1772650 00:15:49.689 05:30:52 -- common/autotest_common.sh@936 -- # '[' -z 1772650 ']' 00:15:49.689 05:30:52 -- common/autotest_common.sh@940 -- # kill -0 1772650 00:15:49.689 05:30:52 -- common/autotest_common.sh@941 -- # uname 00:15:49.689 05:30:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.689 05:30:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772650 00:15:49.689 05:30:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:49.689 05:30:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:49.689 05:30:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772650' 00:15:49.689 killing process with pid 1772650 00:15:49.689 05:30:52 -- common/autotest_common.sh@955 -- # kill 1772650 00:15:49.689 05:30:52 -- common/autotest_common.sh@960 -- # wait 1772650 00:15:49.949 [2024-12-07 05:30:52.929485] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:49.949 05:30:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:49.949 05:30:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:49.949 05:30:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:49.949 05:30:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.949 05:30:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:49.949 05:30:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.949 05:30:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.949 05:30:52 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.862 05:30:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:51.862 00:15:51.862 real 0m6.863s 00:15:51.862 user 0m20.671s 00:15:51.862 sys 0m1.144s 00:15:51.862 05:30:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.862 05:30:55 -- common/autotest_common.sh@10 -- # set +x 00:15:51.862 ************************************ 00:15:51.862 END TEST nvmf_host_management 00:15:51.862 ************************************ 00:15:51.862 05:30:55 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:51.862 00:15:51.862 real 0m14.517s 00:15:51.862 user 0m22.759s 00:15:51.862 sys 0m6.652s 00:15:51.862 05:30:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.862 05:30:55 -- common/autotest_common.sh@10 -- # set +x 00:15:51.862 ************************************ 00:15:51.862 END TEST nvmf_host_management 00:15:51.862 ************************************ 00:15:51.862 05:30:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:51.862 05:30:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.862 05:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.122 05:30:55 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 ************************************ 00:15:52.122 START TEST nvmf_lvol 00:15:52.122 ************************************ 00:15:52.122 05:30:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:52.122 * Looking for test storage... 00:15:52.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.122 05:30:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:52.122 05:30:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:52.122 05:30:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:52.122 05:30:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:52.122 05:30:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:52.122 05:30:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:52.122 05:30:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:52.122 05:30:55 -- scripts/common.sh@335 -- # IFS=.-: 00:15:52.122 05:30:55 -- scripts/common.sh@335 -- # read -ra ver1 00:15:52.122 05:30:55 -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.122 05:30:55 -- scripts/common.sh@336 -- # read -ra ver2 00:15:52.122 05:30:55 -- scripts/common.sh@337 -- # local 'op=<' 00:15:52.122 05:30:55 -- scripts/common.sh@339 -- # ver1_l=2 00:15:52.122 05:30:55 -- scripts/common.sh@340 -- # ver2_l=1 00:15:52.122 05:30:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:52.122 05:30:55 -- scripts/common.sh@343 -- # case "$op" in 00:15:52.122 05:30:55 -- scripts/common.sh@344 -- # : 1 00:15:52.122 05:30:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:52.122 05:30:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.122 05:30:55 -- scripts/common.sh@364 -- # decimal 1 00:15:52.122 05:30:55 -- scripts/common.sh@352 -- # local d=1 00:15:52.122 05:30:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.122 05:30:55 -- scripts/common.sh@354 -- # echo 1 00:15:52.122 05:30:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:52.122 05:30:55 -- scripts/common.sh@365 -- # decimal 2 00:15:52.122 05:30:55 -- scripts/common.sh@352 -- # local d=2 00:15:52.122 05:30:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.122 05:30:55 -- scripts/common.sh@354 -- # echo 2 00:15:52.122 05:30:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:52.122 05:30:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:52.122 05:30:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:52.122 05:30:55 -- scripts/common.sh@367 -- # return 0 00:15:52.122 05:30:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.122 05:30:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.122 --rc genhtml_branch_coverage=1 00:15:52.122 --rc genhtml_function_coverage=1 00:15:52.122 --rc genhtml_legend=1 00:15:52.122 --rc geninfo_all_blocks=1 00:15:52.122 --rc geninfo_unexecuted_blocks=1 00:15:52.122 00:15:52.122 ' 00:15:52.122 05:30:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.122 --rc genhtml_branch_coverage=1 00:15:52.122 --rc genhtml_function_coverage=1 00:15:52.122 --rc genhtml_legend=1 00:15:52.122 --rc geninfo_all_blocks=1 00:15:52.122 --rc geninfo_unexecuted_blocks=1 00:15:52.123 00:15:52.123 ' 00:15:52.123 05:30:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.123 --rc genhtml_branch_coverage=1 00:15:52.123 --rc genhtml_function_coverage=1 00:15:52.123 --rc genhtml_legend=1 00:15:52.123 --rc geninfo_all_blocks=1 00:15:52.123 --rc geninfo_unexecuted_blocks=1 00:15:52.123 00:15:52.123 ' 00:15:52.123 05:30:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.123 --rc genhtml_branch_coverage=1 00:15:52.123 --rc genhtml_function_coverage=1 00:15:52.123 --rc genhtml_legend=1 00:15:52.123 --rc geninfo_all_blocks=1 00:15:52.123 --rc geninfo_unexecuted_blocks=1 00:15:52.123 00:15:52.123 ' 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.123 05:30:55 -- nvmf/common.sh@7 -- # uname -s 00:15:52.123 05:30:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.123 05:30:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.123 05:30:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.123 05:30:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.123 05:30:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.123 05:30:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.123 05:30:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.123 05:30:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.123 05:30:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.123 05:30:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.123 05:30:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:52.123 05:30:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:52.123 05:30:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.123 05:30:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.123 05:30:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.123 05:30:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.123 05:30:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.123 05:30:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.123 05:30:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.123 05:30:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.123 05:30:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.123 05:30:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.123 05:30:55 -- paths/export.sh@5 -- # export PATH 00:15:52.123 05:30:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.123 05:30:55 -- nvmf/common.sh@46 -- # : 0 00:15:52.123 05:30:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:52.123 05:30:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:52.123 05:30:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:52.123 05:30:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.123 05:30:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.123 05:30:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:52.123 05:30:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:52.123 05:30:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.123 05:30:55 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:52.123 05:30:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:52.123 05:30:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.123 05:30:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:52.123 05:30:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:52.123 05:30:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:52.123 05:30:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.123 05:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.123 05:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.123 05:30:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:52.123 05:30:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:52.123 05:30:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:52.123 05:30:55 -- common/autotest_common.sh@10 -- # set +x 00:16:00.266 05:31:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:00.266 05:31:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:00.266 05:31:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:00.266 05:31:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:00.266 05:31:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:00.266 05:31:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:00.266 05:31:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:00.266 05:31:02 -- nvmf/common.sh@294 -- # net_devs=() 00:16:00.266 05:31:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:00.266 05:31:02 -- nvmf/common.sh@295 -- # e810=() 00:16:00.266 05:31:02 -- nvmf/common.sh@295 -- # local -ga e810 00:16:00.266 05:31:02 -- nvmf/common.sh@296 -- # x722=() 00:16:00.266 05:31:02 -- nvmf/common.sh@296 -- # local -ga x722 00:16:00.266 05:31:02 -- nvmf/common.sh@297 -- # mlx=() 00:16:00.266 05:31:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:00.266 05:31:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.266 05:31:02 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.266 05:31:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:00.266 05:31:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:00.266 05:31:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.266 05:31:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:00.266 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:00.266 05:31:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.266 05:31:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:00.266 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:00.266 05:31:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.266 05:31:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.266 05:31:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.266 05:31:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:00.266 Found net devices under 0000:31:00.0: cvl_0_0 00:16:00.266 05:31:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.266 05:31:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.266 05:31:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.266 05:31:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.266 05:31:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:00.266 Found net devices under 0000:31:00.1: cvl_0_1 00:16:00.266 05:31:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.266 05:31:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:00.266 05:31:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:00.266 05:31:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:00.266 05:31:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.266 05:31:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.266 05:31:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.266 
05:31:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:00.266 05:31:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.266 05:31:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.266 05:31:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:00.266 05:31:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.266 05:31:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.266 05:31:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:00.266 05:31:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:00.266 05:31:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.266 05:31:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.266 05:31:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.267 05:31:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.267 05:31:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:00.267 05:31:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.267 05:31:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.267 05:31:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.267 05:31:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:00.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:16:00.267 00:16:00.267 --- 10.0.0.2 ping statistics --- 00:16:00.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.267 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:16:00.267 05:31:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:00.267 00:16:00.267 --- 10.0.0.1 ping statistics --- 00:16:00.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.267 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:00.267 05:31:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.267 05:31:02 -- nvmf/common.sh@410 -- # return 0 00:16:00.267 05:31:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:00.267 05:31:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.267 05:31:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:00.267 05:31:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:00.267 05:31:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.267 05:31:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:00.267 05:31:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:00.267 05:31:02 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:00.267 05:31:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.267 05:31:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.267 05:31:02 -- common/autotest_common.sh@10 -- # set +x 00:16:00.267 05:31:02 -- nvmf/common.sh@469 -- # nvmfpid=1777858 00:16:00.267 05:31:02 -- nvmf/common.sh@470 -- # waitforlisten 1777858 00:16:00.267 05:31:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:00.267 05:31:02 -- common/autotest_common.sh@829 -- # '[' -z 1777858 ']' 00:16:00.267 05:31:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.267 05:31:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.267 05:31:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.267 05:31:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.267 05:31:02 -- common/autotest_common.sh@10 -- # set +x 00:16:00.267 [2024-12-07 05:31:02.905744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:00.267 [2024-12-07 05:31:02.905807] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.267 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.267 [2024-12-07 05:31:02.979055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.267 [2024-12-07 05:31:03.052036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.267 [2024-12-07 05:31:03.052158] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.267 [2024-12-07 05:31:03.052167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.267 [2024-12-07 05:31:03.052174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
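Condensing the nvmf_tcp_init xtrace above: the first E810 port (cvl_0_0) is moved into a private network namespace for the target, while the second port (cvl_0_1) stays in the root namespace as the initiator side, so 10.0.0.1 and 10.0.0.2 talk over the physical back-to-back link. The same bring-up, stripped of xtrace noise (paths relative to the spdk checkout):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator
  modprobe nvme-tcp
  # the nvmf target then runs inside the namespace, mask 0x7 = cores 0-2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7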
00:16:00.267 [2024-12-07 05:31:03.052314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.267 [2024-12-07 05:31:03.052435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.267 [2024-12-07 05:31:03.052437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.528 05:31:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.528 05:31:03 -- common/autotest_common.sh@862 -- # return 0 00:16:00.528 05:31:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.528 05:31:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.528 05:31:03 -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 05:31:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.528 05:31:03 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:00.789 [2024-12-07 05:31:03.893291] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.789 05:31:03 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.049 05:31:04 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:01.049 05:31:04 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.310 05:31:04 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:01.310 05:31:04 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:01.310 05:31:04 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:01.572 05:31:04 -- target/nvmf_lvol.sh@29 -- # lvs=20b72599-676b-4097-ab39-b27026f0a95f 00:16:01.572 05:31:04 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 20b72599-676b-4097-ab39-b27026f0a95f lvol 20 00:16:01.833 05:31:04 -- target/nvmf_lvol.sh@32 -- # lvol=4c06cf26-7094-4773-b969-bb569a0dc97f 00:16:01.833 05:31:04 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:01.833 05:31:05 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c06cf26-7094-4773-b969-bb569a0dc97f 00:16:02.092 05:31:05 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:02.352 [2024-12-07 05:31:05.335519] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.352 05:31:05 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:02.352 05:31:05 -- target/nvmf_lvol.sh@42 -- # perf_pid=1778533 00:16:02.352 05:31:05 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:02.352 05:31:05 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:02.352 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.735 
05:31:06 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4c06cf26-7094-4773-b969-bb569a0dc97f MY_SNAPSHOT 00:16:03.735 05:31:06 -- target/nvmf_lvol.sh@47 -- # snapshot=854cbd59-c07d-4b2d-953d-3d6e5835caf9 00:16:03.735 05:31:06 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4c06cf26-7094-4773-b969-bb569a0dc97f 30 00:16:03.735 05:31:06 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 854cbd59-c07d-4b2d-953d-3d6e5835caf9 MY_CLONE 00:16:03.997 05:31:07 -- target/nvmf_lvol.sh@49 -- # clone=065752a4-288f-4fbc-97c2-7a63ffe5aa49 00:16:03.997 05:31:07 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 065752a4-288f-4fbc-97c2-7a63ffe5aa49 00:16:04.257 05:31:07 -- target/nvmf_lvol.sh@53 -- # wait 1778533 00:16:14.248 Initializing NVMe Controllers 00:16:14.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:14.248 Controller IO queue size 128, less than required. 00:16:14.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:14.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:14.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:14.248 Initialization complete. Launching workers. 00:16:14.248 ======================================================== 00:16:14.248 Latency(us) 00:16:14.248 Device Information : IOPS MiB/s Average min max 00:16:14.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12673.50 49.51 10103.00 1432.74 50825.99 00:16:14.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18300.40 71.49 6994.16 1388.57 51567.24 00:16:14.248 ======================================================== 00:16:14.248 Total : 30973.90 120.99 8266.19 1388.57 51567.24 00:16:14.248 00:16:14.248 05:31:15 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:14.248 05:31:16 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c06cf26-7094-4773-b969-bb569a0dc97f 00:16:14.248 05:31:16 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20b72599-676b-4097-ab39-b27026f0a95f 00:16:14.248 05:31:16 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:14.248 05:31:16 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:14.248 05:31:16 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:14.248 05:31:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:14.248 05:31:16 -- nvmf/common.sh@116 -- # sync 00:16:14.248 05:31:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:14.248 05:31:16 -- nvmf/common.sh@119 -- # set +e 00:16:14.248 05:31:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:14.248 05:31:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:14.248 rmmod nvme_tcp 00:16:14.248 rmmod nvme_fabrics 00:16:14.248 rmmod nvme_keyring 00:16:14.248 05:31:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:14.248 05:31:16 -- nvmf/common.sh@123 -- # set -e 00:16:14.248 05:31:16 -- nvmf/common.sh@124 -- # return 0 00:16:14.248 05:31:16 -- nvmf/common.sh@477 -- # '[' -n 1777858 ']' 
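The nvmf_lvol run above boils down to the following RPC sequence, condensed from the xtrace; rpc.py stands for scripts/rpc.py as invoked in the log, and the shell variables stand in for the UUIDs and bdev names printed above (lvs 20b72599-..., lvol 4c06cf26-..., snapshot 854cbd59-..., clone 065752a4-...):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                       # Malloc0
  rpc.py bdev_malloc_create 64 512                       # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)       # returns the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)      # LVOL_BDEV_INIT_SIZE=20
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # I/O load running while the snapshot/clone operations happen (perf_pid above)
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30                     # LVOL_BDEV_FINAL_SIZE=30
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"
  wait                                                   # let the perf job finish

  # teardown, as in the log above
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"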
00:16:14.248 05:31:16 -- nvmf/common.sh@478 -- # killprocess 1777858 00:16:14.248 05:31:16 -- common/autotest_common.sh@936 -- # '[' -z 1777858 ']' 00:16:14.248 05:31:16 -- common/autotest_common.sh@940 -- # kill -0 1777858 00:16:14.248 05:31:16 -- common/autotest_common.sh@941 -- # uname 00:16:14.248 05:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.248 05:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1777858 00:16:14.248 05:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:14.248 05:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:14.248 05:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1777858' 00:16:14.248 killing process with pid 1777858 00:16:14.248 05:31:16 -- common/autotest_common.sh@955 -- # kill 1777858 00:16:14.248 05:31:16 -- common/autotest_common.sh@960 -- # wait 1777858 00:16:14.248 05:31:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:14.248 05:31:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:14.248 05:31:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:14.248 05:31:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.248 05:31:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:14.248 05:31:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.248 05:31:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.248 05:31:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.632 05:31:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:15.632 00:16:15.632 real 0m23.625s 00:16:15.632 user 1m3.490s 00:16:15.632 sys 0m8.517s 00:16:15.632 05:31:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:15.632 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.632 ************************************ 00:16:15.632 END TEST nvmf_lvol 00:16:15.632 ************************************ 00:16:15.632 05:31:18 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:15.632 05:31:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:15.632 05:31:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.632 05:31:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.632 ************************************ 00:16:15.632 START TEST nvmf_lvs_grow 00:16:15.632 ************************************ 00:16:15.632 05:31:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:15.893 * Looking for test storage... 
00:16:15.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.893 05:31:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:15.893 05:31:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:15.893 05:31:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:15.893 05:31:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:15.893 05:31:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:15.893 05:31:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:15.893 05:31:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:15.893 05:31:18 -- scripts/common.sh@335 -- # IFS=.-: 00:16:15.893 05:31:18 -- scripts/common.sh@335 -- # read -ra ver1 00:16:15.893 05:31:18 -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.893 05:31:18 -- scripts/common.sh@336 -- # read -ra ver2 00:16:15.893 05:31:18 -- scripts/common.sh@337 -- # local 'op=<' 00:16:15.893 05:31:18 -- scripts/common.sh@339 -- # ver1_l=2 00:16:15.893 05:31:18 -- scripts/common.sh@340 -- # ver2_l=1 00:16:15.893 05:31:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:15.893 05:31:18 -- scripts/common.sh@343 -- # case "$op" in 00:16:15.893 05:31:18 -- scripts/common.sh@344 -- # : 1 00:16:15.893 05:31:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:15.893 05:31:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.893 05:31:18 -- scripts/common.sh@364 -- # decimal 1 00:16:15.893 05:31:18 -- scripts/common.sh@352 -- # local d=1 00:16:15.893 05:31:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.893 05:31:18 -- scripts/common.sh@354 -- # echo 1 00:16:15.893 05:31:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:15.893 05:31:18 -- scripts/common.sh@365 -- # decimal 2 00:16:15.893 05:31:18 -- scripts/common.sh@352 -- # local d=2 00:16:15.893 05:31:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.893 05:31:18 -- scripts/common.sh@354 -- # echo 2 00:16:15.893 05:31:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:15.893 05:31:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:15.893 05:31:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:15.893 05:31:18 -- scripts/common.sh@367 -- # return 0 00:16:15.893 05:31:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.893 05:31:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.893 --rc genhtml_branch_coverage=1 00:16:15.893 --rc genhtml_function_coverage=1 00:16:15.893 --rc genhtml_legend=1 00:16:15.893 --rc geninfo_all_blocks=1 00:16:15.893 --rc geninfo_unexecuted_blocks=1 00:16:15.893 00:16:15.893 ' 00:16:15.893 05:31:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.893 --rc genhtml_branch_coverage=1 00:16:15.893 --rc genhtml_function_coverage=1 00:16:15.893 --rc genhtml_legend=1 00:16:15.893 --rc geninfo_all_blocks=1 00:16:15.893 --rc geninfo_unexecuted_blocks=1 00:16:15.893 00:16:15.893 ' 00:16:15.893 05:31:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.893 --rc genhtml_branch_coverage=1 00:16:15.893 --rc genhtml_function_coverage=1 00:16:15.893 --rc genhtml_legend=1 00:16:15.893 --rc geninfo_all_blocks=1 00:16:15.893 --rc geninfo_unexecuted_blocks=1 00:16:15.893 00:16:15.893 
' 00:16:15.893 05:31:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.893 --rc genhtml_branch_coverage=1 00:16:15.893 --rc genhtml_function_coverage=1 00:16:15.893 --rc genhtml_legend=1 00:16:15.893 --rc geninfo_all_blocks=1 00:16:15.893 --rc geninfo_unexecuted_blocks=1 00:16:15.893 00:16:15.893 ' 00:16:15.893 05:31:18 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.893 05:31:18 -- nvmf/common.sh@7 -- # uname -s 00:16:15.893 05:31:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.893 05:31:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.893 05:31:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.893 05:31:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.893 05:31:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.893 05:31:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.893 05:31:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.893 05:31:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.893 05:31:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.893 05:31:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.893 05:31:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.893 05:31:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.894 05:31:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.894 05:31:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.894 05:31:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.894 05:31:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.894 05:31:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.894 05:31:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.894 05:31:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.894 05:31:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.894 05:31:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.894 05:31:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.894 05:31:19 -- paths/export.sh@5 -- # export PATH 00:16:15.894 05:31:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.894 05:31:19 -- nvmf/common.sh@46 -- # : 0 00:16:15.894 05:31:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.894 05:31:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.894 05:31:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.894 05:31:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.894 05:31:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.894 05:31:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.894 05:31:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.894 05:31:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.894 05:31:19 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.894 05:31:19 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:15.894 05:31:19 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:15.894 05:31:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.894 05:31:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.894 05:31:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.894 05:31:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.894 05:31:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.894 05:31:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.894 05:31:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.894 05:31:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.894 05:31:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:15.894 05:31:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:15.894 05:31:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:15.894 05:31:19 -- common/autotest_common.sh@10 -- # set +x 00:16:24.036 05:31:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.036 05:31:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:24.036 05:31:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:24.036 05:31:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:24.036 05:31:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:24.036 05:31:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:24.036 05:31:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:24.036 05:31:26 -- nvmf/common.sh@294 -- # net_devs=() 00:16:24.036 05:31:26 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:16:24.036 05:31:26 -- nvmf/common.sh@295 -- # e810=() 00:16:24.036 05:31:26 -- nvmf/common.sh@295 -- # local -ga e810 00:16:24.036 05:31:26 -- nvmf/common.sh@296 -- # x722=() 00:16:24.036 05:31:26 -- nvmf/common.sh@296 -- # local -ga x722 00:16:24.036 05:31:26 -- nvmf/common.sh@297 -- # mlx=() 00:16:24.036 05:31:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:24.036 05:31:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.036 05:31:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:24.036 05:31:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:24.036 05:31:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:24.036 05:31:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.036 05:31:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:24.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:24.036 05:31:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.036 05:31:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:24.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:24.036 05:31:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:24.036 05:31:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:24.036 05:31:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.036 05:31:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.036 05:31:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.036 05:31:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.036 05:31:26 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:24.036 Found net devices under 0000:31:00.0: cvl_0_0 00:16:24.036 05:31:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.036 05:31:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.036 05:31:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.036 05:31:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.036 05:31:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.036 05:31:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:24.036 Found net devices under 0000:31:00.1: cvl_0_1 00:16:24.037 05:31:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.037 05:31:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:24.037 05:31:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:24.037 05:31:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:24.037 05:31:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:24.037 05:31:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:24.037 05:31:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.037 05:31:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.037 05:31:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.037 05:31:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:24.037 05:31:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.037 05:31:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.037 05:31:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:24.037 05:31:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.037 05:31:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.037 05:31:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:24.037 05:31:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:24.037 05:31:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.037 05:31:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.037 05:31:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.037 05:31:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.037 05:31:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:24.037 05:31:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.037 05:31:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.037 05:31:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.037 05:31:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:24.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:16:24.037 00:16:24.037 --- 10.0.0.2 ping statistics --- 00:16:24.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.037 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:16:24.037 05:31:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:16:24.037 00:16:24.037 --- 10.0.0.1 ping statistics --- 00:16:24.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.037 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:24.037 05:31:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.037 05:31:26 -- nvmf/common.sh@410 -- # return 0 00:16:24.037 05:31:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.037 05:31:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.037 05:31:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.037 05:31:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.037 05:31:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.037 05:31:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.037 05:31:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:24.037 05:31:26 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:24.037 05:31:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:24.037 05:31:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.037 05:31:26 -- common/autotest_common.sh@10 -- # set +x 00:16:24.037 05:31:26 -- nvmf/common.sh@469 -- # nvmfpid=1784985 00:16:24.037 05:31:26 -- nvmf/common.sh@470 -- # waitforlisten 1784985 00:16:24.037 05:31:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:24.037 05:31:26 -- common/autotest_common.sh@829 -- # '[' -z 1784985 ']' 00:16:24.037 05:31:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.037 05:31:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.037 05:31:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.037 05:31:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.037 05:31:26 -- common/autotest_common.sh@10 -- # set +x 00:16:24.037 [2024-12-07 05:31:26.608328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:24.037 [2024-12-07 05:31:26.608408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.037 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.037 [2024-12-07 05:31:26.686489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.037 [2024-12-07 05:31:26.758204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.037 [2024-12-07 05:31:26.758328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.037 [2024-12-07 05:31:26.758336] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.037 [2024-12-07 05:31:26.758344] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
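The network plumbing traced above is what lets a single host act as both TCP target and initiator: one port of the e810 pair is moved into a private network namespace and the nvmf target runs inside it. A condensed sketch, using the interface and namespace names detected in this run (the nvmf_tgt path is abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # reachability check; the trace also pings back from inside the namespace
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # target app started inside the namespace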
00:16:24.037 [2024-12-07 05:31:26.758368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.298 05:31:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.298 05:31:27 -- common/autotest_common.sh@862 -- # return 0 00:16:24.298 05:31:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.298 05:31:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.298 05:31:27 -- common/autotest_common.sh@10 -- # set +x 00:16:24.298 05:31:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.298 05:31:27 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.590 [2024-12-07 05:31:27.569872] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:24.590 05:31:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.590 05:31:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.590 05:31:27 -- common/autotest_common.sh@10 -- # set +x 00:16:24.590 ************************************ 00:16:24.590 START TEST lvs_grow_clean 00:16:24.590 ************************************ 00:16:24.590 05:31:27 -- common/autotest_common.sh@1114 -- # lvs_grow 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:24.590 05:31:27 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:24.849 05:31:27 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:24.849 05:31:27 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:24.849 05:31:27 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:25.109 05:31:28 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:25.109 05:31:28 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:25.109 05:31:28 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb lvol 150 00:16:25.109 05:31:28 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6c261171-461d-45ca-acb8-bd70328681e2 00:16:25.109 05:31:28 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:25.109 05:31:28 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:25.369 [2024-12-07 05:31:28.408077] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:25.369 [2024-12-07 05:31:28.408126] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:25.369 true 00:16:25.369 05:31:28 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:25.369 05:31:28 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:25.369 05:31:28 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:25.369 05:31:28 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:25.629 05:31:28 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c261171-461d-45ca-acb8-bd70328681e2 00:16:25.888 05:31:28 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:25.888 [2024-12-07 05:31:29.013964] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.888 05:31:29 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.148 05:31:29 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1785538 00:16:26.148 05:31:29 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.148 05:31:29 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:26.148 05:31:29 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1785538 /var/tmp/bdevperf.sock 00:16:26.148 05:31:29 -- common/autotest_common.sh@829 -- # '[' -z 1785538 ']' 00:16:26.148 05:31:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.148 05:31:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.148 05:31:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.148 05:31:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.148 05:31:29 -- common/autotest_common.sh@10 -- # set +x 00:16:26.148 [2024-12-07 05:31:29.219092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
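The lvs_grow_clean setup traced above builds the device stack bottom-up: a 200M file becomes an AIO bdev, the AIO bdev hosts an lvstore (49 data clusters at the 4 MiB cluster size used here), a 150M lvol is carved out of it, and that lvol is exported over NVMe/TCP for bdevperf to attach to next. A condensed sketch (paths abbreviated; <lvs-uuid> and <lvol-uuid> are the UUIDs printed in the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, via the bdevperf RPC socket, as traced below:
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0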
00:16:26.148 [2024-12-07 05:31:29.219156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785538 ] 00:16:26.148 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.148 [2024-12-07 05:31:29.300325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.148 [2024-12-07 05:31:29.362784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.088 05:31:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.088 05:31:29 -- common/autotest_common.sh@862 -- # return 0 00:16:27.088 05:31:29 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:27.088 Nvme0n1 00:16:27.088 05:31:30 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:27.348 [ 00:16:27.348 { 00:16:27.348 "name": "Nvme0n1", 00:16:27.348 "aliases": [ 00:16:27.348 "6c261171-461d-45ca-acb8-bd70328681e2" 00:16:27.348 ], 00:16:27.348 "product_name": "NVMe disk", 00:16:27.348 "block_size": 4096, 00:16:27.348 "num_blocks": 38912, 00:16:27.348 "uuid": "6c261171-461d-45ca-acb8-bd70328681e2", 00:16:27.348 "assigned_rate_limits": { 00:16:27.348 "rw_ios_per_sec": 0, 00:16:27.348 "rw_mbytes_per_sec": 0, 00:16:27.348 "r_mbytes_per_sec": 0, 00:16:27.348 "w_mbytes_per_sec": 0 00:16:27.348 }, 00:16:27.348 "claimed": false, 00:16:27.348 "zoned": false, 00:16:27.348 "supported_io_types": { 00:16:27.348 "read": true, 00:16:27.348 "write": true, 00:16:27.348 "unmap": true, 00:16:27.348 "write_zeroes": true, 00:16:27.348 "flush": true, 00:16:27.348 "reset": true, 00:16:27.348 "compare": true, 00:16:27.348 "compare_and_write": true, 00:16:27.348 "abort": true, 00:16:27.348 "nvme_admin": true, 00:16:27.348 "nvme_io": true 00:16:27.348 }, 00:16:27.348 "driver_specific": { 00:16:27.348 "nvme": [ 00:16:27.348 { 00:16:27.348 "trid": { 00:16:27.348 "trtype": "TCP", 00:16:27.348 "adrfam": "IPv4", 00:16:27.348 "traddr": "10.0.0.2", 00:16:27.348 "trsvcid": "4420", 00:16:27.348 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.348 }, 00:16:27.348 "ctrlr_data": { 00:16:27.348 "cntlid": 1, 00:16:27.348 "vendor_id": "0x8086", 00:16:27.348 "model_number": "SPDK bdev Controller", 00:16:27.348 "serial_number": "SPDK0", 00:16:27.348 "firmware_revision": "24.01.1", 00:16:27.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.349 "oacs": { 00:16:27.349 "security": 0, 00:16:27.349 "format": 0, 00:16:27.349 "firmware": 0, 00:16:27.349 "ns_manage": 0 00:16:27.349 }, 00:16:27.349 "multi_ctrlr": true, 00:16:27.349 "ana_reporting": false 00:16:27.349 }, 00:16:27.349 "vs": { 00:16:27.349 "nvme_version": "1.3" 00:16:27.349 }, 00:16:27.349 "ns_data": { 00:16:27.349 "id": 1, 00:16:27.349 "can_share": true 00:16:27.349 } 00:16:27.349 } 00:16:27.349 ], 00:16:27.349 "mp_policy": "active_passive" 00:16:27.349 } 00:16:27.349 } 00:16:27.349 ] 00:16:27.349 05:31:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1785711 00:16:27.349 05:31:30 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:27.349 05:31:30 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:27.349 Running I/O 
for 10 seconds... 00:16:28.293 Latency(us) 00:16:28.293 [2024-12-07T04:31:31.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.293 [2024-12-07T04:31:31.533Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.293 Nvme0n1 : 1.00 18655.00 72.87 0.00 0.00 0.00 0.00 0.00 00:16:28.293 [2024-12-07T04:31:31.533Z] =================================================================================================================== 00:16:28.293 [2024-12-07T04:31:31.533Z] Total : 18655.00 72.87 0.00 0.00 0.00 0.00 0.00 00:16:28.293 00:16:29.235 05:31:32 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:29.235 [2024-12-07T04:31:32.475Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.235 Nvme0n1 : 2.00 18758.50 73.28 0.00 0.00 0.00 0.00 0.00 00:16:29.235 [2024-12-07T04:31:32.475Z] =================================================================================================================== 00:16:29.235 [2024-12-07T04:31:32.475Z] Total : 18758.50 73.28 0.00 0.00 0.00 0.00 0.00 00:16:29.235 00:16:29.496 true 00:16:29.496 05:31:32 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:29.496 05:31:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:29.496 05:31:32 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:29.496 05:31:32 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:29.496 05:31:32 -- target/nvmf_lvs_grow.sh@65 -- # wait 1785711 00:16:30.436 [2024-12-07T04:31:33.676Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.436 Nvme0n1 : 3.00 18812.67 73.49 0.00 0.00 0.00 0.00 0.00 00:16:30.436 [2024-12-07T04:31:33.676Z] =================================================================================================================== 00:16:30.436 [2024-12-07T04:31:33.676Z] Total : 18812.67 73.49 0.00 0.00 0.00 0.00 0.00 00:16:30.436 00:16:31.375 [2024-12-07T04:31:34.615Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.375 Nvme0n1 : 4.00 18844.50 73.61 0.00 0.00 0.00 0.00 0.00 00:16:31.375 [2024-12-07T04:31:34.615Z] =================================================================================================================== 00:16:31.375 [2024-12-07T04:31:34.615Z] Total : 18844.50 73.61 0.00 0.00 0.00 0.00 0.00 00:16:31.375 00:16:32.314 [2024-12-07T04:31:35.554Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.314 Nvme0n1 : 5.00 18861.40 73.68 0.00 0.00 0.00 0.00 0.00 00:16:32.314 [2024-12-07T04:31:35.554Z] =================================================================================================================== 00:16:32.315 [2024-12-07T04:31:35.555Z] Total : 18861.40 73.68 0.00 0.00 0.00 0.00 0.00 00:16:32.315 00:16:33.255 [2024-12-07T04:31:36.495Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.255 Nvme0n1 : 6.00 18894.00 73.80 0.00 0.00 0.00 0.00 0.00 00:16:33.255 [2024-12-07T04:31:36.495Z] =================================================================================================================== 00:16:33.255 [2024-12-07T04:31:36.495Z] Total : 18894.00 73.80 0.00 0.00 0.00 0.00 0.00 00:16:33.255 00:16:34.643 [2024-12-07T04:31:37.883Z] 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.643 Nvme0n1 : 7.00 18908.71 73.86 0.00 0.00 0.00 0.00 0.00 00:16:34.643 [2024-12-07T04:31:37.883Z] =================================================================================================================== 00:16:34.643 [2024-12-07T04:31:37.883Z] Total : 18908.71 73.86 0.00 0.00 0.00 0.00 0.00 00:16:34.643 00:16:35.585 [2024-12-07T04:31:38.825Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.585 Nvme0n1 : 8.00 18919.25 73.90 0.00 0.00 0.00 0.00 0.00 00:16:35.585 [2024-12-07T04:31:38.825Z] =================================================================================================================== 00:16:35.585 [2024-12-07T04:31:38.825Z] Total : 18919.25 73.90 0.00 0.00 0.00 0.00 0.00 00:16:35.585 00:16:36.529 [2024-12-07T04:31:39.769Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.529 Nvme0n1 : 9.00 18934.89 73.96 0.00 0.00 0.00 0.00 0.00 00:16:36.529 [2024-12-07T04:31:39.769Z] =================================================================================================================== 00:16:36.529 [2024-12-07T04:31:39.769Z] Total : 18934.89 73.96 0.00 0.00 0.00 0.00 0.00 00:16:36.529 00:16:37.270 [2024-12-07T04:31:40.510Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.270 Nvme0n1 : 10.00 18939.90 73.98 0.00 0.00 0.00 0.00 0.00 00:16:37.270 [2024-12-07T04:31:40.510Z] =================================================================================================================== 00:16:37.270 [2024-12-07T04:31:40.510Z] Total : 18939.90 73.98 0.00 0.00 0.00 0.00 0.00 00:16:37.270 00:16:37.270 00:16:37.270 Latency(us) 00:16:37.270 [2024-12-07T04:31:40.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.270 [2024-12-07T04:31:40.510Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.270 Nvme0n1 : 10.01 18941.47 73.99 0.00 0.00 6753.43 4014.08 12124.16 00:16:37.270 [2024-12-07T04:31:40.510Z] =================================================================================================================== 00:16:37.270 [2024-12-07T04:31:40.510Z] Total : 18941.47 73.99 0.00 0.00 6753.43 4014.08 12124.16 00:16:37.270 0 00:16:37.542 05:31:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1785538 00:16:37.543 05:31:40 -- common/autotest_common.sh@936 -- # '[' -z 1785538 ']' 00:16:37.543 05:31:40 -- common/autotest_common.sh@940 -- # kill -0 1785538 00:16:37.543 05:31:40 -- common/autotest_common.sh@941 -- # uname 00:16:37.543 05:31:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.543 05:31:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1785538 00:16:37.543 05:31:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.543 05:31:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.543 05:31:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1785538' 00:16:37.543 killing process with pid 1785538 00:16:37.543 05:31:40 -- common/autotest_common.sh@955 -- # kill 1785538 00:16:37.543 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.543 00:16:37.543 Latency(us) 00:16:37.543 [2024-12-07T04:31:40.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.543 [2024-12-07T04:31:40.783Z] 
=================================================================================================================== 00:16:37.543 [2024-12-07T04:31:40.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.543 05:31:40 -- common/autotest_common.sh@960 -- # wait 1785538 00:16:37.543 05:31:40 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:37.804 05:31:40 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:37.804 05:31:40 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:37.804 05:31:41 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:37.804 05:31:41 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:37.804 05:31:41 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:38.065 [2024-12-07 05:31:41.161535] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:38.065 05:31:41 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:38.065 05:31:41 -- common/autotest_common.sh@650 -- # local es=0 00:16:38.065 05:31:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:38.065 05:31:41 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.065 05:31:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.065 05:31:41 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.065 05:31:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.065 05:31:41 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.065 05:31:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.065 05:31:41 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.065 05:31:41 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:38.065 05:31:41 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:38.326 request: 00:16:38.326 { 00:16:38.326 "uuid": "b7561508-d1db-4fc1-8db3-1affbb9cdcbb", 00:16:38.326 "method": "bdev_lvol_get_lvstores", 00:16:38.326 "req_id": 1 00:16:38.326 } 00:16:38.326 Got JSON-RPC error response 00:16:38.326 response: 00:16:38.326 { 00:16:38.326 "code": -19, 00:16:38.326 "message": "No such device" 00:16:38.326 } 00:16:38.326 05:31:41 -- common/autotest_common.sh@653 -- # es=1 00:16:38.326 05:31:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.326 05:31:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.326 05:31:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.326 05:31:41 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:38.326 aio_bdev 00:16:38.326 05:31:41 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6c261171-461d-45ca-acb8-bd70328681e2 00:16:38.326 05:31:41 -- common/autotest_common.sh@897 -- # local bdev_name=6c261171-461d-45ca-acb8-bd70328681e2 00:16:38.326 05:31:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.326 05:31:41 -- common/autotest_common.sh@899 -- # local i 00:16:38.326 05:31:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.326 05:31:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.326 05:31:41 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:38.587 05:31:41 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c261171-461d-45ca-acb8-bd70328681e2 -t 2000 00:16:38.848 [ 00:16:38.848 { 00:16:38.848 "name": "6c261171-461d-45ca-acb8-bd70328681e2", 00:16:38.848 "aliases": [ 00:16:38.848 "lvs/lvol" 00:16:38.848 ], 00:16:38.848 "product_name": "Logical Volume", 00:16:38.848 "block_size": 4096, 00:16:38.848 "num_blocks": 38912, 00:16:38.848 "uuid": "6c261171-461d-45ca-acb8-bd70328681e2", 00:16:38.848 "assigned_rate_limits": { 00:16:38.848 "rw_ios_per_sec": 0, 00:16:38.848 "rw_mbytes_per_sec": 0, 00:16:38.848 "r_mbytes_per_sec": 0, 00:16:38.848 "w_mbytes_per_sec": 0 00:16:38.848 }, 00:16:38.848 "claimed": false, 00:16:38.848 "zoned": false, 00:16:38.848 "supported_io_types": { 00:16:38.848 "read": true, 00:16:38.848 "write": true, 00:16:38.848 "unmap": true, 00:16:38.848 "write_zeroes": true, 00:16:38.848 "flush": false, 00:16:38.848 "reset": true, 00:16:38.848 "compare": false, 00:16:38.848 "compare_and_write": false, 00:16:38.848 "abort": false, 00:16:38.848 "nvme_admin": false, 00:16:38.848 "nvme_io": false 00:16:38.848 }, 00:16:38.848 "driver_specific": { 00:16:38.848 "lvol": { 00:16:38.848 "lvol_store_uuid": "b7561508-d1db-4fc1-8db3-1affbb9cdcbb", 00:16:38.848 "base_bdev": "aio_bdev", 00:16:38.848 "thin_provision": false, 00:16:38.848 "snapshot": false, 00:16:38.848 "clone": false, 00:16:38.848 "esnap_clone": false 00:16:38.848 } 00:16:38.848 } 00:16:38.848 } 00:16:38.848 ] 00:16:38.848 05:31:41 -- common/autotest_common.sh@905 -- # return 0 00:16:38.848 05:31:41 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:38.848 05:31:41 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:38.848 05:31:42 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:38.848 05:31:42 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:38.848 05:31:42 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:39.109 05:31:42 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:39.109 05:31:42 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c261171-461d-45ca-acb8-bd70328681e2 00:16:39.109 05:31:42 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7561508-d1db-4fc1-8db3-1affbb9cdcbb 00:16:39.370 05:31:42 -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.632 00:16:39.632 real 0m15.074s 00:16:39.632 user 0m14.857s 00:16:39.632 sys 0m1.213s 00:16:39.632 05:31:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:39.632 05:31:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.632 ************************************ 00:16:39.632 END TEST lvs_grow_clean 00:16:39.632 ************************************ 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:39.632 05:31:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:39.632 05:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.632 05:31:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.632 ************************************ 00:16:39.632 START TEST lvs_grow_dirty 00:16:39.632 ************************************ 00:16:39.632 05:31:42 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.632 05:31:42 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:39.893 05:31:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:39.893 05:31:42 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:39.893 05:31:43 -- target/nvmf_lvs_grow.sh@28 -- # lvs=18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:39.893 05:31:43 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:39.893 05:31:43 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:40.154 05:31:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:40.154 05:31:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:40.154 05:31:43 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d lvol 150 00:16:40.416 05:31:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:40.416 05:31:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:40.416 05:31:43 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_rescan aio_bdev 00:16:40.416 [2024-12-07 05:31:43.562133] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:40.416 [2024-12-07 05:31:43.562186] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:40.416 true 00:16:40.416 05:31:43 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:40.416 05:31:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:40.676 05:31:43 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:40.676 05:31:43 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:40.676 05:31:43 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:40.937 05:31:44 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.199 05:31:44 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:41.199 05:31:44 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1788495 00:16:41.199 05:31:44 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.199 05:31:44 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:41.199 05:31:44 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1788495 /var/tmp/bdevperf.sock 00:16:41.199 05:31:44 -- common/autotest_common.sh@829 -- # '[' -z 1788495 ']' 00:16:41.199 05:31:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.199 05:31:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.199 05:31:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.199 05:31:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.199 05:31:44 -- common/autotest_common.sh@10 -- # set +x 00:16:41.199 [2024-12-07 05:31:44.374110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
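Both the clean and dirty variants then verify the grow path itself: the backing file is enlarged, the AIO bdev rescanned, and the lvstore grown to claim the new clusters while bdevperf keeps the random-write workload running against the exported lvol. Sketch, with the same abbreviations as above and the cluster counts the traces report:

  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev                 # AIO bdev picks up the new 400M size
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # still 49
  scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # now 99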
00:16:41.199 [2024-12-07 05:31:44.374162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788495 ] 00:16:41.199 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.460 [2024-12-07 05:31:44.451248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.460 [2024-12-07 05:31:44.503703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.031 05:31:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.031 05:31:45 -- common/autotest_common.sh@862 -- # return 0 00:16:42.031 05:31:45 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:42.290 Nvme0n1 00:16:42.290 05:31:45 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:42.549 [ 00:16:42.549 { 00:16:42.549 "name": "Nvme0n1", 00:16:42.549 "aliases": [ 00:16:42.549 "6937eae1-0cf1-4f7b-82d3-6737d7940e11" 00:16:42.549 ], 00:16:42.549 "product_name": "NVMe disk", 00:16:42.549 "block_size": 4096, 00:16:42.549 "num_blocks": 38912, 00:16:42.549 "uuid": "6937eae1-0cf1-4f7b-82d3-6737d7940e11", 00:16:42.549 "assigned_rate_limits": { 00:16:42.549 "rw_ios_per_sec": 0, 00:16:42.549 "rw_mbytes_per_sec": 0, 00:16:42.549 "r_mbytes_per_sec": 0, 00:16:42.549 "w_mbytes_per_sec": 0 00:16:42.549 }, 00:16:42.549 "claimed": false, 00:16:42.549 "zoned": false, 00:16:42.549 "supported_io_types": { 00:16:42.549 "read": true, 00:16:42.549 "write": true, 00:16:42.549 "unmap": true, 00:16:42.549 "write_zeroes": true, 00:16:42.549 "flush": true, 00:16:42.549 "reset": true, 00:16:42.549 "compare": true, 00:16:42.549 "compare_and_write": true, 00:16:42.549 "abort": true, 00:16:42.549 "nvme_admin": true, 00:16:42.549 "nvme_io": true 00:16:42.549 }, 00:16:42.549 "driver_specific": { 00:16:42.549 "nvme": [ 00:16:42.549 { 00:16:42.549 "trid": { 00:16:42.549 "trtype": "TCP", 00:16:42.549 "adrfam": "IPv4", 00:16:42.549 "traddr": "10.0.0.2", 00:16:42.549 "trsvcid": "4420", 00:16:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:42.549 }, 00:16:42.549 "ctrlr_data": { 00:16:42.549 "cntlid": 1, 00:16:42.549 "vendor_id": "0x8086", 00:16:42.549 "model_number": "SPDK bdev Controller", 00:16:42.549 "serial_number": "SPDK0", 00:16:42.549 "firmware_revision": "24.01.1", 00:16:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.549 "oacs": { 00:16:42.549 "security": 0, 00:16:42.549 "format": 0, 00:16:42.549 "firmware": 0, 00:16:42.549 "ns_manage": 0 00:16:42.549 }, 00:16:42.549 "multi_ctrlr": true, 00:16:42.549 "ana_reporting": false 00:16:42.549 }, 00:16:42.549 "vs": { 00:16:42.549 "nvme_version": "1.3" 00:16:42.549 }, 00:16:42.549 "ns_data": { 00:16:42.549 "id": 1, 00:16:42.549 "can_share": true 00:16:42.549 } 00:16:42.549 } 00:16:42.549 ], 00:16:42.549 "mp_policy": "active_passive" 00:16:42.549 } 00:16:42.549 } 00:16:42.549 ] 00:16:42.549 05:31:45 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1788827 00:16:42.549 05:31:45 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:42.549 05:31:45 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:42.549 Running I/O 
for 10 seconds... 00:16:43.487 Latency(us) 00:16:43.487 [2024-12-07T04:31:46.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.487 [2024-12-07T04:31:46.727Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.487 Nvme0n1 : 1.00 18695.00 73.03 0.00 0.00 0.00 0.00 0.00 00:16:43.487 [2024-12-07T04:31:46.727Z] =================================================================================================================== 00:16:43.487 [2024-12-07T04:31:46.727Z] Total : 18695.00 73.03 0.00 0.00 0.00 0.00 0.00 00:16:43.487 00:16:44.424 05:31:47 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:44.685 [2024-12-07T04:31:47.925Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.685 Nvme0n1 : 2.00 18754.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:44.685 [2024-12-07T04:31:47.925Z] =================================================================================================================== 00:16:44.685 [2024-12-07T04:31:47.925Z] Total : 18754.50 73.26 0.00 0.00 0.00 0.00 0.00 00:16:44.685 00:16:44.685 true 00:16:44.685 05:31:47 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:44.685 05:31:47 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:44.946 05:31:47 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:44.946 05:31:47 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:44.946 05:31:47 -- target/nvmf_lvs_grow.sh@65 -- # wait 1788827 00:16:45.516 [2024-12-07T04:31:48.756Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.516 Nvme0n1 : 3.00 18774.00 73.34 0.00 0.00 0.00 0.00 0.00 00:16:45.516 [2024-12-07T04:31:48.756Z] =================================================================================================================== 00:16:45.516 [2024-12-07T04:31:48.756Z] Total : 18774.00 73.34 0.00 0.00 0.00 0.00 0.00 00:16:45.516 00:16:46.898 [2024-12-07T04:31:50.138Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.898 Nvme0n1 : 4.00 18816.25 73.50 0.00 0.00 0.00 0.00 0.00 00:16:46.898 [2024-12-07T04:31:50.138Z] =================================================================================================================== 00:16:46.898 [2024-12-07T04:31:50.138Z] Total : 18816.25 73.50 0.00 0.00 0.00 0.00 0.00 00:16:46.898 00:16:47.839 [2024-12-07T04:31:51.079Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.839 Nvme0n1 : 5.00 18828.80 73.55 0.00 0.00 0.00 0.00 0.00 00:16:47.839 [2024-12-07T04:31:51.079Z] =================================================================================================================== 00:16:47.839 [2024-12-07T04:31:51.079Z] Total : 18828.80 73.55 0.00 0.00 0.00 0.00 0.00 00:16:47.839 00:16:48.779 [2024-12-07T04:31:52.019Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.779 Nvme0n1 : 6.00 18847.67 73.62 0.00 0.00 0.00 0.00 0.00 00:16:48.779 [2024-12-07T04:31:52.019Z] =================================================================================================================== 00:16:48.779 [2024-12-07T04:31:52.019Z] Total : 18847.67 73.62 0.00 0.00 0.00 0.00 0.00 00:16:48.779 00:16:49.718 [2024-12-07T04:31:52.958Z] 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.718 Nvme0n1 : 7.00 18861.29 73.68 0.00 0.00 0.00 0.00 0.00 00:16:49.718 [2024-12-07T04:31:52.958Z] =================================================================================================================== 00:16:49.718 [2024-12-07T04:31:52.958Z] Total : 18861.29 73.68 0.00 0.00 0.00 0.00 0.00 00:16:49.718 00:16:50.656 [2024-12-07T04:31:53.896Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.656 Nvme0n1 : 8.00 18871.75 73.72 0.00 0.00 0.00 0.00 0.00 00:16:50.656 [2024-12-07T04:31:53.896Z] =================================================================================================================== 00:16:50.656 [2024-12-07T04:31:53.896Z] Total : 18871.75 73.72 0.00 0.00 0.00 0.00 0.00 00:16:50.656 00:16:51.597 [2024-12-07T04:31:54.837Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.597 Nvme0n1 : 9.00 18879.56 73.75 0.00 0.00 0.00 0.00 0.00 00:16:51.597 [2024-12-07T04:31:54.837Z] =================================================================================================================== 00:16:51.597 [2024-12-07T04:31:54.837Z] Total : 18879.56 73.75 0.00 0.00 0.00 0.00 0.00 00:16:51.597 00:16:52.534 [2024-12-07T04:31:55.774Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.534 Nvme0n1 : 10.00 18891.90 73.80 0.00 0.00 0.00 0.00 0.00 00:16:52.534 [2024-12-07T04:31:55.774Z] =================================================================================================================== 00:16:52.534 [2024-12-07T04:31:55.774Z] Total : 18891.90 73.80 0.00 0.00 0.00 0.00 0.00 00:16:52.534 00:16:52.534 00:16:52.534 Latency(us) 00:16:52.534 [2024-12-07T04:31:55.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.534 [2024-12-07T04:31:55.774Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.534 Nvme0n1 : 10.01 18891.90 73.80 0.00 0.00 6771.69 1508.69 10048.85 00:16:52.534 [2024-12-07T04:31:55.774Z] =================================================================================================================== 00:16:52.534 [2024-12-07T04:31:55.774Z] Total : 18891.90 73.80 0.00 0.00 6771.69 1508.69 10048.85 00:16:52.534 0 00:16:52.534 05:31:55 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1788495 00:16:52.534 05:31:55 -- common/autotest_common.sh@936 -- # '[' -z 1788495 ']' 00:16:52.534 05:31:55 -- common/autotest_common.sh@940 -- # kill -0 1788495 00:16:52.534 05:31:55 -- common/autotest_common.sh@941 -- # uname 00:16:52.793 05:31:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.793 05:31:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1788495 00:16:52.793 05:31:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:52.793 05:31:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:52.793 05:31:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1788495' 00:16:52.793 killing process with pid 1788495 00:16:52.793 05:31:55 -- common/autotest_common.sh@955 -- # kill 1788495 00:16:52.793 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.793 00:16:52.793 Latency(us) 00:16:52.793 [2024-12-07T04:31:56.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.793 [2024-12-07T04:31:56.033Z] 
=================================================================================================================== 00:16:52.793 [2024-12-07T04:31:56.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.793 05:31:55 -- common/autotest_common.sh@960 -- # wait 1788495 00:16:52.793 05:31:55 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1784985 00:16:53.052 05:31:56 -- target/nvmf_lvs_grow.sh@74 -- # wait 1784985 00:16:53.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1784985 Killed "${NVMF_APP[@]}" "$@" 00:16:53.312 05:31:56 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:53.312 05:31:56 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:53.312 05:31:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:53.312 05:31:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.312 05:31:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.312 05:31:56 -- nvmf/common.sh@469 -- # nvmfpid=1790874 00:16:53.312 05:31:56 -- nvmf/common.sh@470 -- # waitforlisten 1790874 00:16:53.312 05:31:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:53.312 05:31:56 -- common/autotest_common.sh@829 -- # '[' -z 1790874 ']' 00:16:53.312 05:31:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.312 05:31:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.312 05:31:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.312 05:31:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.312 05:31:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.312 [2024-12-07 05:31:56.374020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:53.312 [2024-12-07 05:31:56.374076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.312 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.312 [2024-12-07 05:31:56.443176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.312 [2024-12-07 05:31:56.507083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:53.312 [2024-12-07 05:31:56.507203] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.312 [2024-12-07 05:31:56.507211] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.312 [2024-12-07 05:31:56.507220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.312 [2024-12-07 05:31:56.507237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.250 05:31:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.250 05:31:57 -- common/autotest_common.sh@862 -- # return 0 00:16:54.250 05:31:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:54.250 05:31:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.250 05:31:57 -- common/autotest_common.sh@10 -- # set +x 00:16:54.250 05:31:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.250 05:31:57 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.250 [2024-12-07 05:31:57.328378] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:54.250 [2024-12-07 05:31:57.328464] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:54.250 [2024-12-07 05:31:57.328494] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:54.250 05:31:57 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:54.250 05:31:57 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:54.250 05:31:57 -- common/autotest_common.sh@897 -- # local bdev_name=6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:54.250 05:31:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.250 05:31:57 -- common/autotest_common.sh@899 -- # local i 00:16:54.250 05:31:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.250 05:31:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.250 05:31:57 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.510 05:31:57 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6937eae1-0cf1-4f7b-82d3-6737d7940e11 -t 2000 00:16:54.510 [ 00:16:54.510 { 00:16:54.510 "name": "6937eae1-0cf1-4f7b-82d3-6737d7940e11", 00:16:54.510 "aliases": [ 00:16:54.510 "lvs/lvol" 00:16:54.510 ], 00:16:54.510 "product_name": "Logical Volume", 00:16:54.510 "block_size": 4096, 00:16:54.510 "num_blocks": 38912, 00:16:54.510 "uuid": "6937eae1-0cf1-4f7b-82d3-6737d7940e11", 00:16:54.510 "assigned_rate_limits": { 00:16:54.510 "rw_ios_per_sec": 0, 00:16:54.510 "rw_mbytes_per_sec": 0, 00:16:54.510 "r_mbytes_per_sec": 0, 00:16:54.510 "w_mbytes_per_sec": 0 00:16:54.510 }, 00:16:54.510 "claimed": false, 00:16:54.510 "zoned": false, 00:16:54.510 "supported_io_types": { 00:16:54.510 "read": true, 00:16:54.510 "write": true, 00:16:54.510 "unmap": true, 00:16:54.510 "write_zeroes": true, 00:16:54.510 "flush": false, 00:16:54.510 "reset": true, 00:16:54.510 "compare": false, 00:16:54.510 "compare_and_write": false, 00:16:54.510 "abort": false, 00:16:54.510 "nvme_admin": false, 00:16:54.510 "nvme_io": false 00:16:54.510 }, 00:16:54.510 "driver_specific": { 00:16:54.510 "lvol": { 00:16:54.510 "lvol_store_uuid": "18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d", 00:16:54.510 "base_bdev": "aio_bdev", 00:16:54.510 "thin_provision": false, 00:16:54.510 "snapshot": false, 00:16:54.510 "clone": false, 00:16:54.510 "esnap_clone": false 00:16:54.510 } 00:16:54.510 } 00:16:54.510 } 
00:16:54.510 ] 00:16:54.510 05:31:57 -- common/autotest_common.sh@905 -- # return 0 00:16:54.510 05:31:57 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:54.510 05:31:57 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:54.770 05:31:57 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:54.770 05:31:57 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:54.770 05:31:57 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:54.770 05:31:57 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:54.770 05:31:57 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:55.030 [2024-12-07 05:31:58.132408] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:55.030 05:31:58 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:55.030 05:31:58 -- common/autotest_common.sh@650 -- # local es=0 00:16:55.030 05:31:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:55.031 05:31:58 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.031 05:31:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.031 05:31:58 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.031 05:31:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.031 05:31:58 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.031 05:31:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.031 05:31:58 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.031 05:31:58 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:55.031 05:31:58 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:55.291 request: 00:16:55.291 { 00:16:55.291 "uuid": "18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d", 00:16:55.291 "method": "bdev_lvol_get_lvstores", 00:16:55.291 "req_id": 1 00:16:55.291 } 00:16:55.291 Got JSON-RPC error response 00:16:55.291 response: 00:16:55.291 { 00:16:55.291 "code": -19, 00:16:55.291 "message": "No such device" 00:16:55.291 } 00:16:55.291 05:31:58 -- common/autotest_common.sh@653 -- # es=1 00:16:55.291 05:31:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.291 05:31:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.291 05:31:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.291 05:31:58 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:55.291 aio_bdev 
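The "dirty" lvs_grow case traced above deliberately skips a clean shutdown: the nvmf target is killed with -9, restarted, and the AIO bdev is re-created so that blobstore recovery ("Performing recovery on blobstore", "Recover: blob 0x0/0x1") has to rebuild the grown lvstore from its on-disk metadata, after which the cluster counts are re-checked. A minimal standalone sketch of that sequence, using only rpc.py calls that appear in this trace; the workspace paths, the lvstore UUID and the expected counts (61 free of 99) are specific to this run, and $nvmf_tgt_pid is a placeholder for the PID the harness tracks.
# Sketch of the dirty-recovery check (paths/UUID copied from this run; $nvmf_tgt_pid is a placeholder)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
lvs_uuid=18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d

kill -9 "$nvmf_tgt_pid"            # no clean shutdown, so the lvstore metadata is left dirty
# ...restart nvmf_tgt and wait for its RPC socket (nvmfappstart/waitforlisten in the harness)...

$rpc bdev_aio_create "$aio_file" aio_bdev 4096   # examining aio_bdev triggers blobstore recovery
$rpc bdev_wait_for_examine

free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || echo "recovery check failed: free=$free total=$total"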
00:16:55.291 05:31:58 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:55.291 05:31:58 -- common/autotest_common.sh@897 -- # local bdev_name=6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:55.291 05:31:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:55.291 05:31:58 -- common/autotest_common.sh@899 -- # local i 00:16:55.291 05:31:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:55.291 05:31:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:55.291 05:31:58 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:55.551 05:31:58 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6937eae1-0cf1-4f7b-82d3-6737d7940e11 -t 2000 00:16:55.811 [ 00:16:55.811 { 00:16:55.811 "name": "6937eae1-0cf1-4f7b-82d3-6737d7940e11", 00:16:55.811 "aliases": [ 00:16:55.811 "lvs/lvol" 00:16:55.811 ], 00:16:55.811 "product_name": "Logical Volume", 00:16:55.811 "block_size": 4096, 00:16:55.811 "num_blocks": 38912, 00:16:55.811 "uuid": "6937eae1-0cf1-4f7b-82d3-6737d7940e11", 00:16:55.811 "assigned_rate_limits": { 00:16:55.811 "rw_ios_per_sec": 0, 00:16:55.811 "rw_mbytes_per_sec": 0, 00:16:55.811 "r_mbytes_per_sec": 0, 00:16:55.811 "w_mbytes_per_sec": 0 00:16:55.811 }, 00:16:55.811 "claimed": false, 00:16:55.811 "zoned": false, 00:16:55.811 "supported_io_types": { 00:16:55.811 "read": true, 00:16:55.811 "write": true, 00:16:55.811 "unmap": true, 00:16:55.811 "write_zeroes": true, 00:16:55.811 "flush": false, 00:16:55.811 "reset": true, 00:16:55.811 "compare": false, 00:16:55.811 "compare_and_write": false, 00:16:55.811 "abort": false, 00:16:55.811 "nvme_admin": false, 00:16:55.811 "nvme_io": false 00:16:55.811 }, 00:16:55.811 "driver_specific": { 00:16:55.811 "lvol": { 00:16:55.811 "lvol_store_uuid": "18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d", 00:16:55.811 "base_bdev": "aio_bdev", 00:16:55.811 "thin_provision": false, 00:16:55.811 "snapshot": false, 00:16:55.811 "clone": false, 00:16:55.811 "esnap_clone": false 00:16:55.811 } 00:16:55.811 } 00:16:55.811 } 00:16:55.811 ] 00:16:55.811 05:31:58 -- common/autotest_common.sh@905 -- # return 0 00:16:55.811 05:31:58 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:55.811 05:31:58 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:55.811 05:31:58 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:55.812 05:31:58 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:55.812 05:31:58 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:56.072 05:31:59 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:56.072 05:31:59 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6937eae1-0cf1-4f7b-82d3-6737d7940e11 00:16:56.073 05:31:59 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18adc1b1-92f1-4aa0-a29f-0cbb64d08d3d 00:16:56.333 05:31:59 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:56.594 05:31:59 -- target/nvmf_lvs_grow.sh@94 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.594 00:16:56.594 real 0m16.926s 00:16:56.594 user 0m44.112s 00:16:56.594 sys 0m2.771s 00:16:56.594 05:31:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:56.594 05:31:59 -- common/autotest_common.sh@10 -- # set +x 00:16:56.594 ************************************ 00:16:56.594 END TEST lvs_grow_dirty 00:16:56.594 ************************************ 00:16:56.594 05:31:59 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:56.594 05:31:59 -- common/autotest_common.sh@806 -- # type=--id 00:16:56.594 05:31:59 -- common/autotest_common.sh@807 -- # id=0 00:16:56.594 05:31:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:56.594 05:31:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:56.594 05:31:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:56.594 05:31:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:56.594 05:31:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:56.594 05:31:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:56.594 nvmf_trace.0 00:16:56.594 05:31:59 -- common/autotest_common.sh@821 -- # return 0 00:16:56.594 05:31:59 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:56.594 05:31:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.594 05:31:59 -- nvmf/common.sh@116 -- # sync 00:16:56.594 05:31:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.594 05:31:59 -- nvmf/common.sh@119 -- # set +e 00:16:56.594 05:31:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.594 05:31:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.594 rmmod nvme_tcp 00:16:56.594 rmmod nvme_fabrics 00:16:56.594 rmmod nvme_keyring 00:16:56.594 05:31:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.594 05:31:59 -- nvmf/common.sh@123 -- # set -e 00:16:56.594 05:31:59 -- nvmf/common.sh@124 -- # return 0 00:16:56.594 05:31:59 -- nvmf/common.sh@477 -- # '[' -n 1790874 ']' 00:16:56.594 05:31:59 -- nvmf/common.sh@478 -- # killprocess 1790874 00:16:56.594 05:31:59 -- common/autotest_common.sh@936 -- # '[' -z 1790874 ']' 00:16:56.594 05:31:59 -- common/autotest_common.sh@940 -- # kill -0 1790874 00:16:56.594 05:31:59 -- common/autotest_common.sh@941 -- # uname 00:16:56.594 05:31:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.594 05:31:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1790874 00:16:56.855 05:31:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.855 05:31:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.855 05:31:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1790874' 00:16:56.855 killing process with pid 1790874 00:16:56.855 05:31:59 -- common/autotest_common.sh@955 -- # kill 1790874 00:16:56.855 05:31:59 -- common/autotest_common.sh@960 -- # wait 1790874 00:16:56.855 05:31:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.855 05:31:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.855 05:31:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.855 05:31:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.855 05:31:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.855 05:31:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.855 05:31:59 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.855 05:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.398 05:32:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:59.398 00:16:59.398 real 0m43.286s 00:16:59.398 user 1m5.075s 00:16:59.398 sys 0m10.050s 00:16:59.398 05:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:59.398 05:32:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.398 ************************************ 00:16:59.398 END TEST nvmf_lvs_grow 00:16:59.398 ************************************ 00:16:59.398 05:32:02 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:59.398 05:32:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.398 05:32:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.398 ************************************ 00:16:59.398 START TEST nvmf_bdev_io_wait 00:16:59.398 ************************************ 00:16:59.398 05:32:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:59.398 * Looking for test storage... 00:16:59.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.398 05:32:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:59.398 05:32:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:59.398 05:32:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:59.398 05:32:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:59.398 05:32:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:59.398 05:32:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:59.398 05:32:02 -- scripts/common.sh@335 -- # IFS=.-: 00:16:59.398 05:32:02 -- scripts/common.sh@335 -- # read -ra ver1 00:16:59.398 05:32:02 -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.398 05:32:02 -- scripts/common.sh@336 -- # read -ra ver2 00:16:59.398 05:32:02 -- scripts/common.sh@337 -- # local 'op=<' 00:16:59.398 05:32:02 -- scripts/common.sh@339 -- # ver1_l=2 00:16:59.398 05:32:02 -- scripts/common.sh@340 -- # ver2_l=1 00:16:59.398 05:32:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:59.398 05:32:02 -- scripts/common.sh@343 -- # case "$op" in 00:16:59.398 05:32:02 -- scripts/common.sh@344 -- # : 1 00:16:59.398 05:32:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:59.398 05:32:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.398 05:32:02 -- scripts/common.sh@364 -- # decimal 1 00:16:59.398 05:32:02 -- scripts/common.sh@352 -- # local d=1 00:16:59.398 05:32:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.398 05:32:02 -- scripts/common.sh@354 -- # echo 1 00:16:59.398 05:32:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:59.398 05:32:02 -- scripts/common.sh@365 -- # decimal 2 00:16:59.398 05:32:02 -- scripts/common.sh@352 -- # local d=2 00:16:59.398 05:32:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.398 05:32:02 -- scripts/common.sh@354 -- # echo 2 00:16:59.398 05:32:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:59.398 05:32:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:59.398 05:32:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:59.398 05:32:02 -- scripts/common.sh@367 -- # return 0 00:16:59.398 05:32:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.398 --rc genhtml_branch_coverage=1 00:16:59.398 --rc genhtml_function_coverage=1 00:16:59.398 --rc genhtml_legend=1 00:16:59.398 --rc geninfo_all_blocks=1 00:16:59.398 --rc geninfo_unexecuted_blocks=1 00:16:59.398 00:16:59.398 ' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.398 --rc genhtml_branch_coverage=1 00:16:59.398 --rc genhtml_function_coverage=1 00:16:59.398 --rc genhtml_legend=1 00:16:59.398 --rc geninfo_all_blocks=1 00:16:59.398 --rc geninfo_unexecuted_blocks=1 00:16:59.398 00:16:59.398 ' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.398 --rc genhtml_branch_coverage=1 00:16:59.398 --rc genhtml_function_coverage=1 00:16:59.398 --rc genhtml_legend=1 00:16:59.398 --rc geninfo_all_blocks=1 00:16:59.398 --rc geninfo_unexecuted_blocks=1 00:16:59.398 00:16:59.398 ' 00:16:59.398 05:32:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.398 --rc genhtml_branch_coverage=1 00:16:59.398 --rc genhtml_function_coverage=1 00:16:59.398 --rc genhtml_legend=1 00:16:59.398 --rc geninfo_all_blocks=1 00:16:59.398 --rc geninfo_unexecuted_blocks=1 00:16:59.398 00:16:59.398 ' 00:16:59.398 05:32:02 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.398 05:32:02 -- nvmf/common.sh@7 -- # uname -s 00:16:59.398 05:32:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.398 05:32:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.398 05:32:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.398 05:32:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.398 05:32:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.398 05:32:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.398 05:32:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.398 05:32:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.399 05:32:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.399 05:32:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.399 05:32:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.399 05:32:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.399 05:32:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.399 05:32:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.399 05:32:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.399 05:32:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.399 05:32:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.399 05:32:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.399 05:32:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.399 05:32:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.399 05:32:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.399 05:32:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.399 05:32:02 -- paths/export.sh@5 -- # export PATH 00:16:59.399 05:32:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.399 05:32:02 -- nvmf/common.sh@46 -- # : 0 00:16:59.399 05:32:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:59.399 05:32:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:59.399 05:32:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:59.399 05:32:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.399 05:32:02 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.399 05:32:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:59.399 05:32:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:59.399 05:32:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:59.399 05:32:02 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.399 05:32:02 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.399 05:32:02 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:59.399 05:32:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:59.399 05:32:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.399 05:32:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:59.399 05:32:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:59.399 05:32:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:59.399 05:32:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.399 05:32:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.399 05:32:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.399 05:32:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:59.399 05:32:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:59.399 05:32:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:59.399 05:32:02 -- common/autotest_common.sh@10 -- # set +x 00:17:07.539 05:32:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.539 05:32:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:07.539 05:32:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:07.539 05:32:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:07.539 05:32:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:07.539 05:32:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:07.539 05:32:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:07.539 05:32:09 -- nvmf/common.sh@294 -- # net_devs=() 00:17:07.539 05:32:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:07.539 05:32:09 -- nvmf/common.sh@295 -- # e810=() 00:17:07.539 05:32:09 -- nvmf/common.sh@295 -- # local -ga e810 00:17:07.539 05:32:09 -- nvmf/common.sh@296 -- # x722=() 00:17:07.539 05:32:09 -- nvmf/common.sh@296 -- # local -ga x722 00:17:07.539 05:32:09 -- nvmf/common.sh@297 -- # mlx=() 00:17:07.539 05:32:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:07.539 05:32:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.539 05:32:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:07.539 05:32:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 
00:17:07.539 05:32:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:07.539 05:32:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:07.539 05:32:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:07.539 05:32:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:07.539 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:07.539 05:32:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:07.539 05:32:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:07.539 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:07.539 05:32:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:07.539 05:32:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:07.539 05:32:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:07.539 05:32:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.539 05:32:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:07.539 05:32:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.539 05:32:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:07.539 Found net devices under 0000:31:00.0: cvl_0_0 00:17:07.539 05:32:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.540 05:32:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:07.540 05:32:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.540 05:32:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:07.540 05:32:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.540 05:32:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:07.540 Found net devices under 0000:31:00.1: cvl_0_1 00:17:07.540 05:32:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.540 05:32:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:07.540 05:32:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:07.540 05:32:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:07.540 05:32:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:07.540 05:32:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:07.540 05:32:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.540 05:32:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.540 05:32:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.540 05:32:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:07.540 05:32:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.540 05:32:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.540 05:32:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
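The block above is nvmf/common.sh discovering usable NICs for the phy run: it matches the two Intel E810 functions (device ID 0x159b at 0000:31:00.0 and 0000:31:00.1, driver ice) and resolves each PCI function to its kernel net device through sysfs, which is where cvl_0_0 and cvl_0_1 come from; cvl_0_0 is then used as the target interface and cvl_0_1 as the initiator interface. A small sketch of that sysfs lookup on its own, with the PCI addresses copied from this machine:
# Map the E810 PCI functions found above to their net devices (same lookup the harness performs)
for pci in 0000:31:00.0 0000:31:00.1; do
    devs=(/sys/bus/pci/devices/$pci/net/*)    # one entry per netdev exposed by the function
    echo "Found net devices under $pci: ${devs[@]##*/}"
done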
00:17:07.540 05:32:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.540 05:32:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.540 05:32:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:07.540 05:32:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:07.540 05:32:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.540 05:32:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.540 05:32:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.540 05:32:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.540 05:32:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:07.540 05:32:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.540 05:32:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.540 05:32:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.540 05:32:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:07.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:17:07.540 00:17:07.540 --- 10.0.0.2 ping statistics --- 00:17:07.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.540 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:17:07.540 05:32:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:17:07.540 00:17:07.540 --- 10.0.0.1 ping statistics --- 00:17:07.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.540 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:17:07.540 05:32:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.540 05:32:09 -- nvmf/common.sh@410 -- # return 0 00:17:07.540 05:32:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:07.540 05:32:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.540 05:32:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:07.540 05:32:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:07.540 05:32:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.540 05:32:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:07.540 05:32:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:07.540 05:32:09 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:07.540 05:32:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.540 05:32:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.540 05:32:09 -- common/autotest_common.sh@10 -- # set +x 00:17:07.540 05:32:09 -- nvmf/common.sh@469 -- # nvmfpid=1796017 00:17:07.540 05:32:09 -- nvmf/common.sh@470 -- # waitforlisten 1796017 00:17:07.540 05:32:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:07.540 05:32:09 -- common/autotest_common.sh@829 -- # '[' -z 1796017 ']' 00:17:07.540 05:32:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.540 05:32:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.540 05:32:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.540 05:32:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.540 05:32:09 -- common/autotest_common.sh@10 -- # set +x 00:17:07.540 [2024-12-07 05:32:09.963769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.540 [2024-12-07 05:32:09.963824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.540 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.540 [2024-12-07 05:32:10.035685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.540 [2024-12-07 05:32:10.103862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.540 [2024-12-07 05:32:10.104002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.540 [2024-12-07 05:32:10.104019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.540 [2024-12-07 05:32:10.104028] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.540 [2024-12-07 05:32:10.104110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.540 [2024-12-07 05:32:10.104231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.540 [2024-12-07 05:32:10.104370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.540 [2024-12-07 05:32:10.104371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.540 05:32:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.540 05:32:10 -- common/autotest_common.sh@862 -- # return 0 00:17:07.540 05:32:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.540 05:32:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.540 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.800 05:32:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.800 05:32:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:07.800 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.800 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.800 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.800 05:32:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:07.800 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.800 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.800 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.800 05:32:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.800 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.800 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.800 [2024-12-07 05:32:10.856453] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.800 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.800 05:32:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.800 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.800 05:32:10 -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.800 Malloc0 00:17:07.800 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.800 05:32:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.800 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.800 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.800 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.801 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.801 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.801 05:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.801 05:32:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 [2024-12-07 05:32:10.928328] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.801 05:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1796309 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@30 -- # READ_PID=1796312 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # config=() 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:07.801 05:32:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:07.801 { 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme$subsystem", 00:17:07.801 "trtype": "$TEST_TRANSPORT", 00:17:07.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "$NVMF_PORT", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.801 "hdgst": ${hdgst:-false}, 00:17:07.801 "ddgst": ${ddgst:-false} 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 } 00:17:07.801 EOF 00:17:07.801 )") 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1796315 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # config=() 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:07.801 05:32:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1796319 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:07.801 { 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme$subsystem", 00:17:07.801 "trtype": "$TEST_TRANSPORT", 00:17:07.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "$NVMF_PORT", 00:17:07.801 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.801 "hdgst": ${hdgst:-false}, 00:17:07.801 "ddgst": ${ddgst:-false} 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 } 00:17:07.801 EOF 00:17:07.801 )") 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@35 -- # sync 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # config=() 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # cat 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:07.801 05:32:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:07.801 { 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme$subsystem", 00:17:07.801 "trtype": "$TEST_TRANSPORT", 00:17:07.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "$NVMF_PORT", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.801 "hdgst": ${hdgst:-false}, 00:17:07.801 "ddgst": ${ddgst:-false} 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 } 00:17:07.801 EOF 00:17:07.801 )") 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # config=() 00:17:07.801 05:32:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # cat 00:17:07.801 05:32:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:07.801 { 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme$subsystem", 00:17:07.801 "trtype": "$TEST_TRANSPORT", 00:17:07.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "$NVMF_PORT", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.801 "hdgst": ${hdgst:-false}, 00:17:07.801 "ddgst": ${ddgst:-false} 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 } 00:17:07.801 EOF 00:17:07.801 )") 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # cat 00:17:07.801 05:32:10 -- target/bdev_io_wait.sh@37 -- # wait 1796309 00:17:07.801 05:32:10 -- nvmf/common.sh@542 -- # cat 00:17:07.801 05:32:10 -- nvmf/common.sh@544 -- # jq . 00:17:07.801 05:32:10 -- nvmf/common.sh@544 -- # jq . 00:17:07.801 05:32:10 -- nvmf/common.sh@544 -- # jq . 
00:17:07.801 05:32:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:07.801 05:32:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme1", 00:17:07.801 "trtype": "tcp", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "4420", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.801 "hdgst": false, 00:17:07.801 "ddgst": false 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 }' 00:17:07.801 05:32:10 -- nvmf/common.sh@544 -- # jq . 00:17:07.801 05:32:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:07.801 05:32:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme1", 00:17:07.801 "trtype": "tcp", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "4420", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.801 "hdgst": false, 00:17:07.801 "ddgst": false 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 }' 00:17:07.801 05:32:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:07.801 05:32:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme1", 00:17:07.801 "trtype": "tcp", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "4420", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.801 "hdgst": false, 00:17:07.801 "ddgst": false 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 }' 00:17:07.801 05:32:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:07.801 05:32:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:07.801 "params": { 00:17:07.801 "name": "Nvme1", 00:17:07.801 "trtype": "tcp", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "adrfam": "ipv4", 00:17:07.801 "trsvcid": "4420", 00:17:07.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.801 "hdgst": false, 00:17:07.801 "ddgst": false 00:17:07.801 }, 00:17:07.801 "method": "bdev_nvme_attach_controller" 00:17:07.801 }' 00:17:07.801 [2024-12-07 05:32:10.977852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.801 [2024-12-07 05:32:10.977906] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:07.801 [2024-12-07 05:32:10.979515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.801 [2024-12-07 05:32:10.979562] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:07.801 [2024-12-07 05:32:10.980774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.801 [2024-12-07 05:32:10.980822] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:07.801 [2024-12-07 05:32:10.982054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:07.801 [2024-12-07 05:32:10.982099] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:07.801 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.061 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.061 [2024-12-07 05:32:11.123877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.061 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.061 [2024-12-07 05:32:11.172837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.061 [2024-12-07 05:32:11.183549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.061 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.061 [2024-12-07 05:32:11.229372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.061 [2024-12-07 05:32:11.232421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:08.061 [2024-12-07 05:32:11.277247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:08.061 [2024-12-07 05:32:11.280634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.322 [2024-12-07 05:32:11.328937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:08.322 Running I/O for 1 seconds... 00:17:08.322 Running I/O for 1 seconds... 00:17:08.322 Running I/O for 1 seconds... 00:17:08.583 Running I/O for 1 seconds... 00:17:09.153 00:17:09.153 Latency(us) 00:17:09.153 [2024-12-07T04:32:12.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.153 [2024-12-07T04:32:12.393Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:09.154 Nvme1n1 : 1.00 13970.32 54.57 0.00 0.00 9135.99 4560.21 18896.21 00:17:09.154 [2024-12-07T04:32:12.394Z] =================================================================================================================== 00:17:09.154 [2024-12-07T04:32:12.394Z] Total : 13970.32 54.57 0.00 0.00 9135.99 4560.21 18896.21 00:17:09.416 00:17:09.416 Latency(us) 00:17:09.416 [2024-12-07T04:32:12.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.416 [2024-12-07T04:32:12.656Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:09.416 Nvme1n1 : 1.01 8410.71 32.85 0.00 0.00 15116.23 8683.52 24794.45 00:17:09.416 [2024-12-07T04:32:12.656Z] =================================================================================================================== 00:17:09.416 [2024-12-07T04:32:12.656Z] Total : 8410.71 32.85 0.00 0.00 15116.23 8683.52 24794.45 00:17:09.416 05:32:12 -- target/bdev_io_wait.sh@38 -- # wait 1796312 00:17:09.416 00:17:09.416 Latency(us) 00:17:09.416 [2024-12-07T04:32:12.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.416 [2024-12-07T04:32:12.656Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:09.416 Nvme1n1 : 1.00 191533.20 748.18 0.00 0.00 665.89 264.53 761.17 00:17:09.416 [2024-12-07T04:32:12.656Z] =================================================================================================================== 00:17:09.416 [2024-12-07T04:32:12.656Z] Total : 191533.20 748.18 0.00 0.00 665.89 264.53 761.17 00:17:09.416 05:32:12 -- target/bdev_io_wait.sh@39 -- # wait 1796315 00:17:09.416 00:17:09.416 Latency(us) 00:17:09.416 [2024-12-07T04:32:12.656Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:17:09.416 [2024-12-07T04:32:12.656Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:09.416 Nvme1n1 : 1.00 9000.94 35.16 0.00 0.00 14184.35 4369.07 39540.05 00:17:09.416 [2024-12-07T04:32:12.656Z] =================================================================================================================== 00:17:09.416 [2024-12-07T04:32:12.656Z] Total : 9000.94 35.16 0.00 0.00 14184.35 4369.07 39540.05 00:17:09.678 05:32:12 -- target/bdev_io_wait.sh@40 -- # wait 1796319 00:17:09.678 05:32:12 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.678 05:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.678 05:32:12 -- common/autotest_common.sh@10 -- # set +x 00:17:09.678 05:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.678 05:32:12 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:09.678 05:32:12 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:09.678 05:32:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:09.678 05:32:12 -- nvmf/common.sh@116 -- # sync 00:17:09.678 05:32:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:09.678 05:32:12 -- nvmf/common.sh@119 -- # set +e 00:17:09.678 05:32:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:09.678 05:32:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:09.678 rmmod nvme_tcp 00:17:09.678 rmmod nvme_fabrics 00:17:09.678 rmmod nvme_keyring 00:17:09.678 05:32:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:09.678 05:32:12 -- nvmf/common.sh@123 -- # set -e 00:17:09.678 05:32:12 -- nvmf/common.sh@124 -- # return 0 00:17:09.678 05:32:12 -- nvmf/common.sh@477 -- # '[' -n 1796017 ']' 00:17:09.678 05:32:12 -- nvmf/common.sh@478 -- # killprocess 1796017 00:17:09.678 05:32:12 -- common/autotest_common.sh@936 -- # '[' -z 1796017 ']' 00:17:09.678 05:32:12 -- common/autotest_common.sh@940 -- # kill -0 1796017 00:17:09.678 05:32:12 -- common/autotest_common.sh@941 -- # uname 00:17:09.678 05:32:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.678 05:32:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1796017 00:17:09.678 05:32:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.678 05:32:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.678 05:32:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1796017' 00:17:09.678 killing process with pid 1796017 00:17:09.678 05:32:12 -- common/autotest_common.sh@955 -- # kill 1796017 00:17:09.678 05:32:12 -- common/autotest_common.sh@960 -- # wait 1796017 00:17:09.939 05:32:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:09.939 05:32:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:09.939 05:32:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:09.939 05:32:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.939 05:32:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:09.939 05:32:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.939 05:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.939 05:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.870 05:32:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:11.870 00:17:11.870 real 0m12.988s 00:17:11.870 user 0m19.105s 00:17:11.870 sys 0m7.084s 00:17:11.870 05:32:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 
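[editor's sketch] With all four workers reported, the trace above tears the target back down: the subsystem is deleted over RPC, nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt process is killed, and the test addresses and namespace are removed. A rough by-hand equivalent is shown below; it assumes you are in an SPDK checkout, and the pid and interface names are the ones from this particular run, so substitute your own.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
    sync
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill 1796017                     # nvmf_tgt pid for this run; killprocess additionally checks the process name
    ip netns delete cvl_0_0_ns_spdk  # roughly what remove_spdk_ns does for the namespace created by nvmf_tcp_init
    ip -4 addr flush cvl_0_1         # clear the initiator-side test address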
00:17:11.870 05:32:15 -- common/autotest_common.sh@10 -- # set +x 00:17:11.870 ************************************ 00:17:11.870 END TEST nvmf_bdev_io_wait 00:17:11.870 ************************************ 00:17:12.130 05:32:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:12.130 05:32:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.130 05:32:15 -- common/autotest_common.sh@10 -- # set +x 00:17:12.130 ************************************ 00:17:12.130 START TEST nvmf_queue_depth 00:17:12.130 ************************************ 00:17:12.130 05:32:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:12.130 * Looking for test storage... 00:17:12.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.130 05:32:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:12.130 05:32:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:12.130 05:32:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:12.130 05:32:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:12.130 05:32:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:12.130 05:32:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:12.130 05:32:15 -- scripts/common.sh@335 -- # IFS=.-: 00:17:12.130 05:32:15 -- scripts/common.sh@335 -- # read -ra ver1 00:17:12.130 05:32:15 -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.130 05:32:15 -- scripts/common.sh@336 -- # read -ra ver2 00:17:12.130 05:32:15 -- scripts/common.sh@337 -- # local 'op=<' 00:17:12.130 05:32:15 -- scripts/common.sh@339 -- # ver1_l=2 00:17:12.130 05:32:15 -- scripts/common.sh@340 -- # ver2_l=1 00:17:12.130 05:32:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:12.130 05:32:15 -- scripts/common.sh@343 -- # case "$op" in 00:17:12.130 05:32:15 -- scripts/common.sh@344 -- # : 1 00:17:12.130 05:32:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:12.130 05:32:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.130 05:32:15 -- scripts/common.sh@364 -- # decimal 1 00:17:12.130 05:32:15 -- scripts/common.sh@352 -- # local d=1 00:17:12.130 05:32:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.130 05:32:15 -- scripts/common.sh@354 -- # echo 1 00:17:12.130 05:32:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:12.130 05:32:15 -- scripts/common.sh@365 -- # decimal 2 00:17:12.130 05:32:15 -- scripts/common.sh@352 -- # local d=2 00:17:12.130 05:32:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.130 05:32:15 -- scripts/common.sh@354 -- # echo 2 00:17:12.130 05:32:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:12.130 05:32:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:12.130 05:32:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:12.130 05:32:15 -- scripts/common.sh@367 -- # return 0 00:17:12.130 05:32:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.130 --rc genhtml_branch_coverage=1 00:17:12.130 --rc genhtml_function_coverage=1 00:17:12.130 --rc genhtml_legend=1 00:17:12.130 --rc geninfo_all_blocks=1 00:17:12.130 --rc geninfo_unexecuted_blocks=1 00:17:12.130 00:17:12.130 ' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.130 --rc genhtml_branch_coverage=1 00:17:12.130 --rc genhtml_function_coverage=1 00:17:12.130 --rc genhtml_legend=1 00:17:12.130 --rc geninfo_all_blocks=1 00:17:12.130 --rc geninfo_unexecuted_blocks=1 00:17:12.130 00:17:12.130 ' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.130 --rc genhtml_branch_coverage=1 00:17:12.130 --rc genhtml_function_coverage=1 00:17:12.130 --rc genhtml_legend=1 00:17:12.130 --rc geninfo_all_blocks=1 00:17:12.130 --rc geninfo_unexecuted_blocks=1 00:17:12.130 00:17:12.130 ' 00:17:12.130 05:32:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.130 --rc genhtml_branch_coverage=1 00:17:12.130 --rc genhtml_function_coverage=1 00:17:12.130 --rc genhtml_legend=1 00:17:12.130 --rc geninfo_all_blocks=1 00:17:12.130 --rc geninfo_unexecuted_blocks=1 00:17:12.130 00:17:12.130 ' 00:17:12.130 05:32:15 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.130 05:32:15 -- nvmf/common.sh@7 -- # uname -s 00:17:12.130 05:32:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.130 05:32:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.130 05:32:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.130 05:32:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.130 05:32:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.130 05:32:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.130 05:32:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.130 05:32:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.130 05:32:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.130 05:32:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.130 05:32:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.130 05:32:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.131 05:32:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.131 05:32:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.131 05:32:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.131 05:32:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.131 05:32:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.131 05:32:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.131 05:32:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.131 05:32:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.131 05:32:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.131 05:32:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.131 05:32:15 -- paths/export.sh@5 -- # export PATH 00:17:12.131 05:32:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.131 05:32:15 -- nvmf/common.sh@46 -- # : 0 00:17:12.131 05:32:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:12.131 05:32:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:12.131 05:32:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:12.131 05:32:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.131 05:32:15 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.131 05:32:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:12.131 05:32:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:12.131 05:32:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:12.390 05:32:15 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:12.390 05:32:15 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:12.390 05:32:15 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.390 05:32:15 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:12.390 05:32:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:12.390 05:32:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.390 05:32:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:12.390 05:32:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:12.390 05:32:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:12.390 05:32:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.390 05:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.390 05:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.390 05:32:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:12.390 05:32:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:12.390 05:32:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:12.390 05:32:15 -- common/autotest_common.sh@10 -- # set +x 00:17:20.528 05:32:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.528 05:32:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:20.528 05:32:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:20.528 05:32:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:20.528 05:32:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:20.528 05:32:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:20.528 05:32:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:20.528 05:32:22 -- nvmf/common.sh@294 -- # net_devs=() 00:17:20.528 05:32:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:20.528 05:32:22 -- nvmf/common.sh@295 -- # e810=() 00:17:20.528 05:32:22 -- nvmf/common.sh@295 -- # local -ga e810 00:17:20.528 05:32:22 -- nvmf/common.sh@296 -- # x722=() 00:17:20.528 05:32:22 -- nvmf/common.sh@296 -- # local -ga x722 00:17:20.528 05:32:22 -- nvmf/common.sh@297 -- # mlx=() 00:17:20.528 05:32:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:20.528 05:32:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.528 05:32:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:20.528 05:32:22 -- nvmf/common.sh@320 -- 
# [[ tcp == rdma ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:20.528 05:32:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.528 05:32:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:20.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:20.528 05:32:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.528 05:32:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:20.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:20.528 05:32:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.528 05:32:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.528 05:32:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.528 05:32:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:20.528 Found net devices under 0000:31:00.0: cvl_0_0 00:17:20.528 05:32:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.528 05:32:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.528 05:32:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.528 05:32:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.528 05:32:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:20.528 Found net devices under 0000:31:00.1: cvl_0_1 00:17:20.528 05:32:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.528 05:32:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:20.528 05:32:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:20.528 05:32:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:20.528 05:32:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.528 05:32:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.528 05:32:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.528 05:32:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:20.528 05:32:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.528 05:32:22 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.528 05:32:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:20.528 05:32:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.528 05:32:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.528 05:32:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:20.528 05:32:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:20.528 05:32:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.528 05:32:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.528 05:32:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.528 05:32:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.528 05:32:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:20.528 05:32:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.528 05:32:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.528 05:32:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.528 05:32:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:20.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:17:20.528 00:17:20.528 --- 10.0.0.2 ping statistics --- 00:17:20.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.528 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:17:20.528 05:32:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:17:20.528 00:17:20.529 --- 10.0.0.1 ping statistics --- 00:17:20.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.529 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:17:20.529 05:32:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.529 05:32:22 -- nvmf/common.sh@410 -- # return 0 00:17:20.529 05:32:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.529 05:32:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.529 05:32:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.529 05:32:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.529 05:32:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.529 05:32:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.529 05:32:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.529 05:32:22 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:20.529 05:32:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.529 05:32:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.529 05:32:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.529 05:32:22 -- nvmf/common.sh@469 -- # nvmfpid=1800930 00:17:20.529 05:32:22 -- nvmf/common.sh@470 -- # waitforlisten 1800930 00:17:20.529 05:32:22 -- common/autotest_common.sh@829 -- # '[' -z 1800930 ']' 00:17:20.529 05:32:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:20.529 05:32:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.529 05:32:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.529 05:32:22 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.529 05:32:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.529 05:32:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.529 [2024-12-07 05:32:22.984601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.529 [2024-12-07 05:32:22.984650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.529 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.529 [2024-12-07 05:32:23.069377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.529 [2024-12-07 05:32:23.131957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.529 [2024-12-07 05:32:23.132083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.529 [2024-12-07 05:32:23.132092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.529 [2024-12-07 05:32:23.132099] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.529 [2024-12-07 05:32:23.132127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.790 05:32:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.790 05:32:23 -- common/autotest_common.sh@862 -- # return 0 00:17:20.790 05:32:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:20.790 05:32:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 05:32:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.790 05:32:23 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.790 05:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 [2024-12-07 05:32:23.859724] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.790 05:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.790 05:32:23 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.790 05:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 Malloc0 00:17:20.790 05:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.790 05:32:23 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.790 05:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 05:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.790 05:32:23 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.790 05:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 05:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.790 05:32:23 -- target/queue_depth.sh@27 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.790 05:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 [2024-12-07 05:32:23.916750] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.790 05:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.790 05:32:23 -- target/queue_depth.sh@30 -- # bdevperf_pid=1801175 00:17:20.790 05:32:23 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.790 05:32:23 -- target/queue_depth.sh@33 -- # waitforlisten 1801175 /var/tmp/bdevperf.sock 00:17:20.790 05:32:23 -- common/autotest_common.sh@829 -- # '[' -z 1801175 ']' 00:17:20.790 05:32:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.790 05:32:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.790 05:32:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.790 05:32:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.790 05:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.790 05:32:23 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:20.790 [2024-12-07 05:32:23.974860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.790 [2024-12-07 05:32:23.974929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801175 ] 00:17:20.790 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.049 [2024-12-07 05:32:24.041832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.049 [2024-12-07 05:32:24.110402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.617 05:32:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.617 05:32:24 -- common/autotest_common.sh@862 -- # return 0 00:17:21.617 05:32:24 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:21.617 05:32:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.617 05:32:24 -- common/autotest_common.sh@10 -- # set +x 00:17:21.617 NVMe0n1 00:17:21.617 05:32:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.617 05:32:24 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:21.877 Running I/O for 10 seconds... 
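[editor's sketch] The 10-second run started above was assembled entirely over JSON-RPC: queue_depth.sh@23-27 built the target (TCP transport, a 64 MiB / 512 B Malloc bdev, one subsystem with a namespace and a 10.0.0.2:4420 listener), then a second bdevperf was launched in RPC-server mode (-z -r /var/tmp/bdevperf.sock) at queue depth 1024, the controller was attached over that socket, and bdevperf.py kicked off the workload. Condensed into plain rpc.py calls, with nvmf_tgt already running (it was started earlier with -m 0x2), paths relative to the SPDK checkout, and the waits for each RPC socket omitted:

    # target side (mirrors the rpc_cmd lines in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf waits for work on its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

That 1024-deep queue against a single Malloc-backed namespace is what produces the verify-workload IOPS and latency figures reported next.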
00:17:31.867 00:17:31.867 Latency(us) 00:17:31.867 [2024-12-07T04:32:35.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.867 [2024-12-07T04:32:35.107Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:31.867 Verification LBA range: start 0x0 length 0x4000 00:17:31.867 NVMe0n1 : 10.04 19405.43 75.80 0.00 0.00 52617.53 9721.17 52428.80 00:17:31.867 [2024-12-07T04:32:35.107Z] =================================================================================================================== 00:17:31.867 [2024-12-07T04:32:35.107Z] Total : 19405.43 75.80 0.00 0.00 52617.53 9721.17 52428.80 00:17:31.867 0 00:17:31.867 05:32:34 -- target/queue_depth.sh@39 -- # killprocess 1801175 00:17:31.867 05:32:34 -- common/autotest_common.sh@936 -- # '[' -z 1801175 ']' 00:17:31.867 05:32:34 -- common/autotest_common.sh@940 -- # kill -0 1801175 00:17:31.867 05:32:34 -- common/autotest_common.sh@941 -- # uname 00:17:31.867 05:32:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.867 05:32:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1801175 00:17:31.867 05:32:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.867 05:32:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.867 05:32:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1801175' 00:17:31.867 killing process with pid 1801175 00:17:31.867 05:32:35 -- common/autotest_common.sh@955 -- # kill 1801175 00:17:31.867 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.867 00:17:31.867 Latency(us) 00:17:31.867 [2024-12-07T04:32:35.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.867 [2024-12-07T04:32:35.107Z] =================================================================================================================== 00:17:31.867 [2024-12-07T04:32:35.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.867 05:32:35 -- common/autotest_common.sh@960 -- # wait 1801175 00:17:32.126 05:32:35 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:32.126 05:32:35 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:32.126 05:32:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:32.126 05:32:35 -- nvmf/common.sh@116 -- # sync 00:17:32.126 05:32:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:32.126 05:32:35 -- nvmf/common.sh@119 -- # set +e 00:17:32.126 05:32:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:32.126 05:32:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:32.126 rmmod nvme_tcp 00:17:32.126 rmmod nvme_fabrics 00:17:32.126 rmmod nvme_keyring 00:17:32.126 05:32:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:32.126 05:32:35 -- nvmf/common.sh@123 -- # set -e 00:17:32.126 05:32:35 -- nvmf/common.sh@124 -- # return 0 00:17:32.126 05:32:35 -- nvmf/common.sh@477 -- # '[' -n 1800930 ']' 00:17:32.126 05:32:35 -- nvmf/common.sh@478 -- # killprocess 1800930 00:17:32.126 05:32:35 -- common/autotest_common.sh@936 -- # '[' -z 1800930 ']' 00:17:32.126 05:32:35 -- common/autotest_common.sh@940 -- # kill -0 1800930 00:17:32.126 05:32:35 -- common/autotest_common.sh@941 -- # uname 00:17:32.126 05:32:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.126 05:32:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1800930 00:17:32.126 05:32:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.126 05:32:35 -- common/autotest_common.sh@946 
-- # '[' reactor_1 = sudo ']' 00:17:32.126 05:32:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1800930' 00:17:32.126 killing process with pid 1800930 00:17:32.126 05:32:35 -- common/autotest_common.sh@955 -- # kill 1800930 00:17:32.126 05:32:35 -- common/autotest_common.sh@960 -- # wait 1800930 00:17:32.386 05:32:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:32.386 05:32:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:32.386 05:32:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:32.386 05:32:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.386 05:32:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:32.386 05:32:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.386 05:32:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.386 05:32:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.292 05:32:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:34.292 00:17:34.292 real 0m22.367s 00:17:34.292 user 0m25.643s 00:17:34.292 sys 0m6.753s 00:17:34.292 05:32:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.292 05:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:34.292 ************************************ 00:17:34.292 END TEST nvmf_queue_depth 00:17:34.292 ************************************ 00:17:34.553 05:32:37 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:34.553 05:32:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.553 05:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:34.553 ************************************ 00:17:34.553 START TEST nvmf_multipath 00:17:34.553 ************************************ 00:17:34.553 05:32:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:34.553 * Looking for test storage... 00:17:34.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.553 05:32:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:34.553 05:32:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:34.553 05:32:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:34.553 05:32:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:34.553 05:32:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.553 05:32:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.553 05:32:37 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.553 05:32:37 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.553 05:32:37 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.553 05:32:37 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.553 05:32:37 -- scripts/common.sh@337 -- # local 'op=<' 00:17:34.553 05:32:37 -- scripts/common.sh@339 -- # ver1_l=2 00:17:34.553 05:32:37 -- scripts/common.sh@340 -- # ver2_l=1 00:17:34.553 05:32:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.553 05:32:37 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.553 05:32:37 -- scripts/common.sh@344 -- # : 1 00:17:34.553 05:32:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.553 05:32:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.553 05:32:37 -- scripts/common.sh@364 -- # decimal 1 00:17:34.553 05:32:37 -- scripts/common.sh@352 -- # local d=1 00:17:34.553 05:32:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.553 05:32:37 -- scripts/common.sh@354 -- # echo 1 00:17:34.553 05:32:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.553 05:32:37 -- scripts/common.sh@365 -- # decimal 2 00:17:34.553 05:32:37 -- scripts/common.sh@352 -- # local d=2 00:17:34.553 05:32:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.553 05:32:37 -- scripts/common.sh@354 -- # echo 2 00:17:34.553 05:32:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:34.553 05:32:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.553 05:32:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.553 05:32:37 -- scripts/common.sh@367 -- # return 0 00:17:34.553 05:32:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.553 --rc genhtml_branch_coverage=1 00:17:34.553 --rc genhtml_function_coverage=1 00:17:34.553 --rc genhtml_legend=1 00:17:34.553 --rc geninfo_all_blocks=1 00:17:34.553 --rc geninfo_unexecuted_blocks=1 00:17:34.553 00:17:34.553 ' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.553 --rc genhtml_branch_coverage=1 00:17:34.553 --rc genhtml_function_coverage=1 00:17:34.553 --rc genhtml_legend=1 00:17:34.553 --rc geninfo_all_blocks=1 00:17:34.553 --rc geninfo_unexecuted_blocks=1 00:17:34.553 00:17:34.553 ' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.553 --rc genhtml_branch_coverage=1 00:17:34.553 --rc genhtml_function_coverage=1 00:17:34.553 --rc genhtml_legend=1 00:17:34.553 --rc geninfo_all_blocks=1 00:17:34.553 --rc geninfo_unexecuted_blocks=1 00:17:34.553 00:17:34.553 ' 00:17:34.553 05:32:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.553 --rc genhtml_branch_coverage=1 00:17:34.553 --rc genhtml_function_coverage=1 00:17:34.553 --rc genhtml_legend=1 00:17:34.553 --rc geninfo_all_blocks=1 00:17:34.553 --rc geninfo_unexecuted_blocks=1 00:17:34.553 00:17:34.553 ' 00:17:34.553 05:32:37 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.553 05:32:37 -- nvmf/common.sh@7 -- # uname -s 00:17:34.553 05:32:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.553 05:32:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.553 05:32:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.553 05:32:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.553 05:32:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.553 05:32:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.553 05:32:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.553 05:32:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.553 05:32:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.553 05:32:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.553 05:32:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.553 05:32:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.553 05:32:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.553 05:32:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.553 05:32:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.553 05:32:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.553 05:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.553 05:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.553 05:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.554 05:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.554 05:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.554 05:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.554 05:32:37 -- paths/export.sh@5 -- # export PATH 00:17:34.554 05:32:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.554 05:32:37 -- nvmf/common.sh@46 -- # : 0 00:17:34.554 05:32:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:34.554 05:32:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:34.554 05:32:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:34.554 05:32:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.554 05:32:37 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.554 05:32:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:34.554 05:32:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:34.554 05:32:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:34.554 05:32:37 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.554 05:32:37 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.554 05:32:37 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:34.554 05:32:37 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.554 05:32:37 -- target/multipath.sh@43 -- # nvmftestinit 00:17:34.554 05:32:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:34.554 05:32:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.554 05:32:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:34.554 05:32:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:34.814 05:32:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:34.814 05:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.814 05:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.814 05:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.814 05:32:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:34.814 05:32:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:34.814 05:32:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:34.814 05:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:43.164 05:32:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:43.164 05:32:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:43.164 05:32:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:43.164 05:32:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:43.164 05:32:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:43.164 05:32:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:43.164 05:32:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:43.164 05:32:44 -- nvmf/common.sh@294 -- # net_devs=() 00:17:43.164 05:32:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:43.164 05:32:44 -- nvmf/common.sh@295 -- # e810=() 00:17:43.164 05:32:44 -- nvmf/common.sh@295 -- # local -ga e810 00:17:43.164 05:32:44 -- nvmf/common.sh@296 -- # x722=() 00:17:43.164 05:32:44 -- nvmf/common.sh@296 -- # local -ga x722 00:17:43.164 05:32:44 -- nvmf/common.sh@297 -- # mlx=() 00:17:43.164 05:32:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:43.164 05:32:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.164 05:32:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.164 
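[editor's sketch] The arrays being filled above are the harness's NIC whitelist: Intel E810 (0x1592/0x159b) and X722 (0x37d2) parts plus a set of Mellanox device IDs, which the scan that follows matches against the PCI bus before settling on the two cvl_0_* interfaces. The loop below is a hand-run stand-in for that gather_supported_nvmf_pci_devs scan, limited to the E810 ID found in this run and assuming lspci is available.

    # list net interfaces backed by Intel E810 ports (vendor 0x8086, device 0x159b)
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done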
05:32:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:43.164 05:32:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:43.164 05:32:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:43.164 05:32:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.164 05:32:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:43.164 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:43.164 05:32:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.164 05:32:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:43.164 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:43.164 05:32:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:43.164 05:32:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.164 05:32:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.164 05:32:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.164 05:32:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.164 05:32:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:43.164 Found net devices under 0000:31:00.0: cvl_0_0 00:17:43.164 05:32:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.164 05:32:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.164 05:32:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.164 05:32:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.164 05:32:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.164 05:32:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:43.164 Found net devices under 0000:31:00.1: cvl_0_1 00:17:43.164 05:32:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.164 05:32:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:43.164 05:32:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:43.164 05:32:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:43.164 05:32:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:43.164 05:32:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.164 05:32:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.165 05:32:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.165 05:32:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:43.165 05:32:44 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.165 05:32:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.165 05:32:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:43.165 05:32:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.165 05:32:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.165 05:32:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:43.165 05:32:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:43.165 05:32:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.165 05:32:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.165 05:32:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.165 05:32:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.165 05:32:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:43.165 05:32:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.165 05:32:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.165 05:32:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.165 05:32:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:43.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:17:43.165 00:17:43.165 --- 10.0.0.2 ping statistics --- 00:17:43.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.165 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:17:43.165 05:32:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:17:43.165 00:17:43.165 --- 10.0.0.1 ping statistics --- 00:17:43.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.165 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:17:43.165 05:32:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.165 05:32:45 -- nvmf/common.sh@410 -- # return 0 00:17:43.165 05:32:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:43.165 05:32:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.165 05:32:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:43.165 05:32:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:43.165 05:32:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.165 05:32:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:43.165 05:32:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:43.165 05:32:45 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:43.165 05:32:45 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:43.165 only one NIC for nvmf test 00:17:43.165 05:32:45 -- target/multipath.sh@47 -- # nvmftestfini 00:17:43.165 05:32:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.165 05:32:45 -- nvmf/common.sh@116 -- # sync 00:17:43.165 05:32:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.165 05:32:45 -- nvmf/common.sh@119 -- # set +e 00:17:43.165 05:32:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.165 05:32:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.165 rmmod nvme_tcp 00:17:43.165 rmmod nvme_fabrics 00:17:43.165 rmmod nvme_keyring 00:17:43.165 05:32:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.165 05:32:45 -- nvmf/common.sh@123 -- # set -e 00:17:43.165 05:32:45 -- nvmf/common.sh@124 -- # return 0 00:17:43.165 05:32:45 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:43.165 05:32:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:43.165 05:32:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:43.165 05:32:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:43.165 05:32:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.165 05:32:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:43.165 05:32:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.165 05:32:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.165 05:32:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.548 05:32:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:44.548 05:32:47 -- target/multipath.sh@48 -- # exit 0 00:17:44.548 05:32:47 -- target/multipath.sh@1 -- # nvmftestfini 00:17:44.548 05:32:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:44.548 05:32:47 -- nvmf/common.sh@116 -- # sync 00:17:44.548 05:32:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:44.548 05:32:47 -- nvmf/common.sh@119 -- # set +e 00:17:44.548 05:32:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:44.548 05:32:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:44.548 05:32:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:44.548 05:32:47 -- nvmf/common.sh@123 -- # set -e 00:17:44.548 05:32:47 -- nvmf/common.sh@124 -- # return 0 00:17:44.549 05:32:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:44.549 05:32:47 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:17:44.549 05:32:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.549 05:32:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:44.549 05:32:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.549 05:32:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.549 05:32:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.549 05:32:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:44.549 00:17:44.549 real 0m9.905s 00:17:44.549 user 0m2.041s 00:17:44.549 sys 0m5.769s 00:17:44.549 05:32:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:44.549 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:17:44.549 ************************************ 00:17:44.549 END TEST nvmf_multipath 00:17:44.549 ************************************ 00:17:44.549 05:32:47 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:44.549 05:32:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.549 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:17:44.549 ************************************ 00:17:44.549 START TEST nvmf_zcopy 00:17:44.549 ************************************ 00:17:44.549 05:32:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:44.549 * Looking for test storage... 00:17:44.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.549 05:32:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:44.549 05:32:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:44.549 05:32:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:44.549 05:32:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:44.549 05:32:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:44.549 05:32:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:44.549 05:32:47 -- scripts/common.sh@335 -- # IFS=.-: 00:17:44.549 05:32:47 -- scripts/common.sh@335 -- # read -ra ver1 00:17:44.549 05:32:47 -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.549 05:32:47 -- scripts/common.sh@336 -- # read -ra ver2 00:17:44.549 05:32:47 -- scripts/common.sh@337 -- # local 'op=<' 00:17:44.549 05:32:47 -- scripts/common.sh@339 -- # ver1_l=2 00:17:44.549 05:32:47 -- scripts/common.sh@340 -- # ver2_l=1 00:17:44.549 05:32:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:44.549 05:32:47 -- scripts/common.sh@343 -- # case "$op" in 00:17:44.549 05:32:47 -- scripts/common.sh@344 -- # : 1 00:17:44.549 05:32:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:44.549 05:32:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.549 05:32:47 -- scripts/common.sh@364 -- # decimal 1 00:17:44.549 05:32:47 -- scripts/common.sh@352 -- # local d=1 00:17:44.549 05:32:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.549 05:32:47 -- scripts/common.sh@354 -- # echo 1 00:17:44.549 05:32:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:44.549 05:32:47 -- scripts/common.sh@365 -- # decimal 2 00:17:44.549 05:32:47 -- scripts/common.sh@352 -- # local d=2 00:17:44.549 05:32:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.549 05:32:47 -- scripts/common.sh@354 -- # echo 2 00:17:44.549 05:32:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:44.549 05:32:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:44.549 05:32:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:44.549 05:32:47 -- scripts/common.sh@367 -- # return 0 00:17:44.549 05:32:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:44.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.549 --rc genhtml_branch_coverage=1 00:17:44.549 --rc genhtml_function_coverage=1 00:17:44.549 --rc genhtml_legend=1 00:17:44.549 --rc geninfo_all_blocks=1 00:17:44.549 --rc geninfo_unexecuted_blocks=1 00:17:44.549 00:17:44.549 ' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:44.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.549 --rc genhtml_branch_coverage=1 00:17:44.549 --rc genhtml_function_coverage=1 00:17:44.549 --rc genhtml_legend=1 00:17:44.549 --rc geninfo_all_blocks=1 00:17:44.549 --rc geninfo_unexecuted_blocks=1 00:17:44.549 00:17:44.549 ' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:44.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.549 --rc genhtml_branch_coverage=1 00:17:44.549 --rc genhtml_function_coverage=1 00:17:44.549 --rc genhtml_legend=1 00:17:44.549 --rc geninfo_all_blocks=1 00:17:44.549 --rc geninfo_unexecuted_blocks=1 00:17:44.549 00:17:44.549 ' 00:17:44.549 05:32:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:44.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.549 --rc genhtml_branch_coverage=1 00:17:44.549 --rc genhtml_function_coverage=1 00:17:44.549 --rc genhtml_legend=1 00:17:44.549 --rc geninfo_all_blocks=1 00:17:44.549 --rc geninfo_unexecuted_blocks=1 00:17:44.549 00:17:44.549 ' 00:17:44.549 05:32:47 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.549 05:32:47 -- nvmf/common.sh@7 -- # uname -s 00:17:44.549 05:32:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.549 05:32:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.549 05:32:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.549 05:32:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.549 05:32:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.549 05:32:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.549 05:32:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.549 05:32:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.549 05:32:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.549 05:32:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.549 05:32:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.549 05:32:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.549 05:32:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.549 05:32:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.549 05:32:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.549 05:32:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.549 05:32:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.549 05:32:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.549 05:32:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.549 05:32:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.549 05:32:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.549 05:32:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.549 05:32:47 -- paths/export.sh@5 -- # export PATH 00:17:44.549 05:32:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.549 05:32:47 -- nvmf/common.sh@46 -- # : 0 00:17:44.549 05:32:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:44.549 05:32:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:44.549 05:32:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.549 05:32:47 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.549 05:32:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:44.549 05:32:47 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:44.549 05:32:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:44.549 05:32:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.549 05:32:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:44.549 05:32:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:44.549 05:32:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:44.549 05:32:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.549 05:32:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.549 05:32:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.549 05:32:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:44.549 05:32:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:44.549 05:32:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:44.549 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:52.695 05:32:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:52.695 05:32:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:52.695 05:32:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:52.695 05:32:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:52.695 05:32:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:52.695 05:32:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:52.695 05:32:54 -- nvmf/common.sh@294 -- # net_devs=() 00:17:52.695 05:32:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:52.695 05:32:54 -- nvmf/common.sh@295 -- # e810=() 00:17:52.695 05:32:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:52.695 05:32:54 -- nvmf/common.sh@296 -- # x722=() 00:17:52.695 05:32:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:52.695 05:32:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:52.695 05:32:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:52.695 05:32:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.695 05:32:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:52.695 05:32:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:52.695 05:32:54 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.695 05:32:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:52.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:52.695 05:32:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.695 05:32:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:52.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:52.695 05:32:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.695 05:32:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.695 05:32:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.695 05:32:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:52.695 Found net devices under 0000:31:00.0: cvl_0_0 00:17:52.695 05:32:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.695 05:32:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.695 05:32:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.695 05:32:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.695 05:32:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:52.695 Found net devices under 0000:31:00.1: cvl_0_1 00:17:52.695 05:32:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.695 05:32:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:52.695 05:32:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:52.695 05:32:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.695 05:32:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.695 05:32:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.695 05:32:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:52.695 05:32:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.695 05:32:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.695 05:32:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:52.695 05:32:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.695 05:32:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:17:52.695 05:32:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:52.695 05:32:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:52.695 05:32:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.695 05:32:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.695 05:32:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.695 05:32:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.695 05:32:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:52.695 05:32:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.695 05:32:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.695 05:32:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.695 05:32:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:52.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:17:52.695 00:17:52.695 --- 10.0.0.2 ping statistics --- 00:17:52.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.695 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:17:52.695 05:32:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:17:52.695 00:17:52.695 --- 10.0.0.1 ping statistics --- 00:17:52.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.695 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:52.695 05:32:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.695 05:32:54 -- nvmf/common.sh@410 -- # return 0 00:17:52.695 05:32:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:52.695 05:32:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.695 05:32:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:52.695 05:32:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.695 05:32:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:52.695 05:32:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:52.695 05:32:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:52.695 05:32:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:52.695 05:32:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.695 05:32:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:54 -- nvmf/common.sh@469 -- # nvmfpid=1812034 00:17:52.695 05:32:54 -- nvmf/common.sh@470 -- # waitforlisten 1812034 00:17:52.695 05:32:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.695 05:32:54 -- common/autotest_common.sh@829 -- # '[' -z 1812034 ']' 00:17:52.695 05:32:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.695 05:32:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.695 05:32:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:52.695 05:32:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.695 05:32:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 [2024-12-07 05:32:54.912102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:52.695 [2024-12-07 05:32:54.912164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.695 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.695 [2024-12-07 05:32:55.003170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.695 [2024-12-07 05:32:55.092873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.695 [2024-12-07 05:32:55.093033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.695 [2024-12-07 05:32:55.093043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.695 [2024-12-07 05:32:55.093052] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.695 [2024-12-07 05:32:55.093077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.695 05:32:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.695 05:32:55 -- common/autotest_common.sh@862 -- # return 0 00:17:52.695 05:32:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.695 05:32:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.695 05:32:55 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:52.695 05:32:55 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 [2024-12-07 05:32:55.755942] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 [2024-12-07 05:32:55.780237] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@29 -- # rpc_cmd 
bdev_malloc_create 32 4096 -b malloc0 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 malloc0 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.695 05:32:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.695 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.695 05:32:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.695 05:32:55 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:52.695 05:32:55 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:52.695 05:32:55 -- nvmf/common.sh@520 -- # config=() 00:17:52.695 05:32:55 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.695 05:32:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.695 05:32:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.695 { 00:17:52.695 "params": { 00:17:52.695 "name": "Nvme$subsystem", 00:17:52.695 "trtype": "$TEST_TRANSPORT", 00:17:52.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.695 "adrfam": "ipv4", 00:17:52.695 "trsvcid": "$NVMF_PORT", 00:17:52.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.695 "hdgst": ${hdgst:-false}, 00:17:52.695 "ddgst": ${ddgst:-false} 00:17:52.695 }, 00:17:52.695 "method": "bdev_nvme_attach_controller" 00:17:52.695 } 00:17:52.695 EOF 00:17:52.695 )") 00:17:52.695 05:32:55 -- nvmf/common.sh@542 -- # cat 00:17:52.695 05:32:55 -- nvmf/common.sh@544 -- # jq . 00:17:52.695 05:32:55 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.695 05:32:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.695 "params": { 00:17:52.695 "name": "Nvme1", 00:17:52.695 "trtype": "tcp", 00:17:52.695 "traddr": "10.0.0.2", 00:17:52.695 "adrfam": "ipv4", 00:17:52.695 "trsvcid": "4420", 00:17:52.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.695 "hdgst": false, 00:17:52.695 "ddgst": false 00:17:52.695 }, 00:17:52.695 "method": "bdev_nvme_attach_controller" 00:17:52.695 }' 00:17:52.695 [2024-12-07 05:32:55.874560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:52.695 [2024-12-07 05:32:55.874623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812068 ] 00:17:52.695 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.956 [2024-12-07 05:32:55.940636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.956 [2024-12-07 05:32:56.014525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.217 Running I/O for 10 seconds... 
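Condensed, the target-side configuration that zcopy.sh drives through rpc_cmd above amounts to the RPC sequence below, followed by the first bdevperf run. The rpc.py invocation and relative paths are assumptions for illustration (rpc_cmd is the test suite's wrapper around scripts/rpc.py); the flags themselves are copied from the trace: a TCP transport with --zcopy, a subsystem limited to 10 namespaces, a 32 MB / 4096-byte-block malloc bdev attached as NSID 1, and listeners on 10.0.0.2:4420.

# The target itself was started inside the namespace earlier in the trace:
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
# rpc_cmd is assumed here to forward to scripts/rpc.py against /var/tmp/spdk.sock.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # zero-copy TCP transport, flags as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# First run above: 10 s verify workload, queue depth 128, 8 KiB I/O, initiator
# config generated by the nvmf/common.sh helper and passed over --json.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192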
00:18:03.210 00:18:03.210 Latency(us) 00:18:03.210 [2024-12-07T04:33:06.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.210 [2024-12-07T04:33:06.450Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:03.210 Verification LBA range: start 0x0 length 0x1000 00:18:03.210 Nvme1n1 : 10.01 13074.65 102.15 0.00 0.00 9753.44 1133.23 18568.53 00:18:03.210 [2024-12-07T04:33:06.450Z] =================================================================================================================== 00:18:03.210 [2024-12-07T04:33:06.450Z] Total : 13074.65 102.15 0.00 0.00 9753.44 1133.23 18568.53 00:18:03.470 05:33:06 -- target/zcopy.sh@39 -- # perfpid=1814308 00:18:03.470 05:33:06 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:03.470 05:33:06 -- common/autotest_common.sh@10 -- # set +x 00:18:03.471 05:33:06 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:03.471 05:33:06 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:03.471 05:33:06 -- nvmf/common.sh@520 -- # config=() 00:18:03.471 05:33:06 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.471 05:33:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.471 05:33:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.471 { 00:18:03.471 "params": { 00:18:03.471 "name": "Nvme$subsystem", 00:18:03.471 "trtype": "$TEST_TRANSPORT", 00:18:03.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.471 "adrfam": "ipv4", 00:18:03.471 "trsvcid": "$NVMF_PORT", 00:18:03.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.471 "hdgst": ${hdgst:-false}, 00:18:03.471 "ddgst": ${ddgst:-false} 00:18:03.471 }, 00:18:03.471 "method": "bdev_nvme_attach_controller" 00:18:03.471 } 00:18:03.471 EOF 00:18:03.471 )") 00:18:03.471 05:33:06 -- nvmf/common.sh@542 -- # cat 00:18:03.471 [2024-12-07 05:33:06.490353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.490382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 05:33:06 -- nvmf/common.sh@544 -- # jq . 00:18:03.471 05:33:06 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.471 05:33:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.471 "params": { 00:18:03.471 "name": "Nvme1", 00:18:03.471 "trtype": "tcp", 00:18:03.471 "traddr": "10.0.0.2", 00:18:03.471 "adrfam": "ipv4", 00:18:03.471 "trsvcid": "4420", 00:18:03.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.471 "hdgst": false, 00:18:03.471 "ddgst": false 00:18:03.471 }, 00:18:03.471 "method": "bdev_nvme_attach_controller" 00:18:03.471 }' 00:18:03.471 [2024-12-07 05:33:06.502351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.502359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.514378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.514385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.525611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
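Both bdevperf runs are configured through a file descriptor (--json /dev/fd/62 and /dev/fd/63 in the trace), fed by gen_nvmf_target_json. The attach parameters printed above boil down to a single bdev_nvme_attach_controller call; the sketch below reproduces those parameters verbatim but wraps them in the standard SPDK JSON-config layout, which is an assumption here since the trace only shows the inner object, and the temporary file path is hypothetical.

# Hypothetical stand-in for gen_nvmf_target_json: a JSON config that makes bdevperf
# attach controller "Nvme1" over NVMe/TCP to the subsystem configured above.
cat > /tmp/nvmf_bdevperf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Second run traced here: 5 s of 50/50 random read/write, queue depth 128, 8 KiB I/O.
./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192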
00:18:03.471 [2024-12-07 05:33:06.525657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814308 ] 00:18:03.471 [2024-12-07 05:33:06.526407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.526415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.538438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.538445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.550468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.550476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.471 [2024-12-07 05:33:06.562499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.562506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.574531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.574539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.585804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.471 [2024-12-07 05:33:06.586562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.586569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.598593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.598602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.610623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.610633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.622657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.622670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.634687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.634697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.646717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.646730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.647833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.471 [2024-12-07 05:33:06.658750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.658759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.670787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.670802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.682813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.682824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.694844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.694854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.471 [2024-12-07 05:33:06.706874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.471 [2024-12-07 05:33:06.706882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.718914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.718928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.730940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.730949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.742970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.742980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.755002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.755015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.767037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.767046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.779071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.779085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.791097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.791105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 Running I/O for 5 seconds... 
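From this point to the end of the capture the target prints the same pair of messages every dozen milliseconds or so while the 5-second randrw job runs: nvmf_subsystem_add_ns is retried against cnode1 even though NSID 1 (malloc0) is already attached, so each attempt is rejected with "Requested NSID 1 already in use" and reported from nvmf_rpc_ns_paused as "Unable to add namespace". The loop itself is not visible because zcopy.sh turned xtrace off at line 41 above, so the following is only a hypothetical sketch of the kind of loop that produces this pattern:

# Hypothetical sketch only; the real loop in target/zcopy.sh is hidden by xtrace_disable.
# perfpid (1814308 above) is the backgrounded bdevperf; keep hitting the add-namespace
# RPC while it runs so namespace pause/resume overlaps with in-flight zero-copy I/O.
while kill -0 "$perfpid" 2> /dev/null; do
    # NSID 1 is already attached, so each call is expected to fail with the
    # "Requested NSID 1 already in use" / "Unable to add namespace" pair logged here.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done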
00:18:03.732 [2024-12-07 05:33:06.806341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.806357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.819523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.819538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.832494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.832511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.845487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.845504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.858157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.858173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.870729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.870744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.883500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.883515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.896228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.896243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.908750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.908764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.921475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.921489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.934396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.934410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.946814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.946830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.732 [2024-12-07 05:33:06.959352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.732 [2024-12-07 05:33:06.959367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:06.972573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:06.972588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:06.985477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 
[2024-12-07 05:33:06.985491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:06.998360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:06.998375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.011494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.011509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.024136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.024151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.037147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.037162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.050054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.050069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.062674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.062689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.075487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.075502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.088652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.088666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.101307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.101322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.114311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.114325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.127319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.127334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.140294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.140309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.152236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.152250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.165400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.165415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.178398] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.178413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.191212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.191227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.204281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.204296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.217385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.217400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.993 [2024-12-07 05:33:07.230468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.993 [2024-12-07 05:33:07.230482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.253 [2024-12-07 05:33:07.243403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.253 [2024-12-07 05:33:07.243418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.253 [2024-12-07 05:33:07.256648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.253 [2024-12-07 05:33:07.256662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.253 [2024-12-07 05:33:07.269529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.269543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.282201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.282215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.295592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.295606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.308322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.308337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.321505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.321520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.334174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.334189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.346974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.346988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.359643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.359661] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.372621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.372635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.385238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.385252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.398093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.398107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.411166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.411181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.423701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.423715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.436694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.436709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.449838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.449852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.463112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.463126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.475976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.475990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.254 [2024-12-07 05:33:07.489266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.254 [2024-12-07 05:33:07.489281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.515 [2024-12-07 05:33:07.501976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.515 [2024-12-07 05:33:07.501990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.515 [2024-12-07 05:33:07.514870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.515 [2024-12-07 05:33:07.514884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.515 [2024-12-07 05:33:07.527799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.515 [2024-12-07 05:33:07.527813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.515 [2024-12-07 05:33:07.540619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.515 [2024-12-07 05:33:07.540633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.515 [2024-12-07 05:33:07.553283] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.515 [2024-12-07 05:33:07.553297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors -- subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats back-to-back roughly every 13 ms from [2024-12-07 05:33:07.566094] through [2024-12-07 05:33:11.441196]; the intervening ~300 identical pairs are elided here ...]
00:18:08.435 [2024-12-07 05:33:11.454258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.454273]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.467134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.467149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.480116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.480134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.493040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.493055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.505808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.505822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.518810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.518824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.531738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.531753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.544802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.544817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.557782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.557797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.570639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.570654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.583351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.583366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.596172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.596187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.609163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.609178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.621503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.621518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.634649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.634664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.647580] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.647595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.660309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.660323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.435 [2024-12-07 05:33:11.672905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.435 [2024-12-07 05:33:11.672920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.685989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.686004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.699245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.699259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.711665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.711680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.724912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.724931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.737704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.737718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.750826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.750840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.763835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.763850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.776771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.776786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.789212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.789227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.802156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.802170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 00:18:08.695 Latency(us) 00:18:08.695 [2024-12-07T04:33:11.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.695 [2024-12-07T04:33:11.935Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:08.695 Nvme1n1 : 5.00 20259.62 158.28 0.00 0.00 6312.09 2512.21 12615.68 00:18:08.695 [2024-12-07T04:33:11.935Z] 
=================================================================================================================== 00:18:08.695 [2024-12-07T04:33:11.935Z] Total : 20259.62 158.28 0.00 0.00 6312.09 2512.21 12615.68 00:18:08.695 [2024-12-07 05:33:11.811867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.811881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.823896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.823908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.835935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.835951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.847961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.847973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.859989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.859999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.872020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.872030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.884049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.884058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.695 [2024-12-07 05:33:11.896077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.695 [2024-12-07 05:33:11.896086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.696 [2024-12-07 05:33:11.908111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.696 [2024-12-07 05:33:11.908123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.696 [2024-12-07 05:33:11.920137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.696 [2024-12-07 05:33:11.920154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.696 [2024-12-07 05:33:11.932169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.696 [2024-12-07 05:33:11.932179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.956 [2024-12-07 05:33:11.944199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.956 [2024-12-07 05:33:11.944207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1814308) - No such process 00:18:08.956 05:33:11 -- target/zcopy.sh@49 -- # wait 1814308 00:18:08.956 05:33:11 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.956 05:33:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.956 05:33:11 -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.956 05:33:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.956 05:33:11 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:08.956 05:33:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.956 05:33:11 -- common/autotest_common.sh@10 -- # set +x 00:18:08.956 delay0 00:18:08.956 05:33:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.956 05:33:11 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:08.956 05:33:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.956 05:33:11 -- common/autotest_common.sh@10 -- # set +x 00:18:08.956 05:33:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.956 05:33:11 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:08.956 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.956 [2024-12-07 05:33:12.045044] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:17.087 Initializing NVMe Controllers 00:18:17.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.087 Initialization complete. Launching workers. 00:18:17.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 31611 00:18:17.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31708, failed to submit 136 00:18:17.087 success 31637, unsuccess 71, failed 0 00:18:17.087 05:33:19 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:17.087 05:33:19 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:17.087 05:33:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:17.087 05:33:19 -- nvmf/common.sh@116 -- # sync 00:18:17.087 05:33:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:17.087 05:33:19 -- nvmf/common.sh@119 -- # set +e 00:18:17.087 05:33:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:17.087 05:33:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:17.087 rmmod nvme_tcp 00:18:17.087 rmmod nvme_fabrics 00:18:17.087 rmmod nvme_keyring 00:18:17.087 05:33:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:17.087 05:33:19 -- nvmf/common.sh@123 -- # set -e 00:18:17.087 05:33:19 -- nvmf/common.sh@124 -- # return 0 00:18:17.087 05:33:19 -- nvmf/common.sh@477 -- # '[' -n 1812034 ']' 00:18:17.087 05:33:19 -- nvmf/common.sh@478 -- # killprocess 1812034 00:18:17.087 05:33:19 -- common/autotest_common.sh@936 -- # '[' -z 1812034 ']' 00:18:17.087 05:33:19 -- common/autotest_common.sh@940 -- # kill -0 1812034 00:18:17.087 05:33:19 -- common/autotest_common.sh@941 -- # uname 00:18:17.087 05:33:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.087 05:33:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1812034 00:18:17.087 05:33:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:17.087 05:33:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:17.087 05:33:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1812034' 00:18:17.087 killing process with pid 1812034 00:18:17.087 05:33:19 -- 
common/autotest_common.sh@955 -- # kill 1812034 00:18:17.087 05:33:19 -- common/autotest_common.sh@960 -- # wait 1812034 00:18:17.087 05:33:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:17.087 05:33:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:17.087 05:33:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:17.087 05:33:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.087 05:33:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:17.087 05:33:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.087 05:33:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.087 05:33:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.469 05:33:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.469 00:18:18.469 real 0m33.977s 00:18:18.469 user 0m45.322s 00:18:18.469 sys 0m11.134s 00:18:18.469 05:33:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:18.469 05:33:21 -- common/autotest_common.sh@10 -- # set +x 00:18:18.469 ************************************ 00:18:18.469 END TEST nvmf_zcopy 00:18:18.469 ************************************ 00:18:18.469 05:33:21 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:18.469 05:33:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.469 05:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.469 05:33:21 -- common/autotest_common.sh@10 -- # set +x 00:18:18.469 ************************************ 00:18:18.469 START TEST nvmf_nmic 00:18:18.469 ************************************ 00:18:18.469 05:33:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:18.469 * Looking for test storage... 00:18:18.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.469 05:33:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:18.469 05:33:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:18.469 05:33:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:18.729 05:33:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:18.729 05:33:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:18.729 05:33:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:18.729 05:33:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:18.729 05:33:21 -- scripts/common.sh@335 -- # IFS=.-: 00:18:18.729 05:33:21 -- scripts/common.sh@335 -- # read -ra ver1 00:18:18.729 05:33:21 -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.729 05:33:21 -- scripts/common.sh@336 -- # read -ra ver2 00:18:18.730 05:33:21 -- scripts/common.sh@337 -- # local 'op=<' 00:18:18.730 05:33:21 -- scripts/common.sh@339 -- # ver1_l=2 00:18:18.730 05:33:21 -- scripts/common.sh@340 -- # ver2_l=1 00:18:18.730 05:33:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:18.730 05:33:21 -- scripts/common.sh@343 -- # case "$op" in 00:18:18.730 05:33:21 -- scripts/common.sh@344 -- # : 1 00:18:18.730 05:33:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:18.730 05:33:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.730 05:33:21 -- scripts/common.sh@364 -- # decimal 1 00:18:18.730 05:33:21 -- scripts/common.sh@352 -- # local d=1 00:18:18.730 05:33:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.730 05:33:21 -- scripts/common.sh@354 -- # echo 1 00:18:18.730 05:33:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:18.730 05:33:21 -- scripts/common.sh@365 -- # decimal 2 00:18:18.730 05:33:21 -- scripts/common.sh@352 -- # local d=2 00:18:18.730 05:33:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.730 05:33:21 -- scripts/common.sh@354 -- # echo 2 00:18:18.730 05:33:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:18.730 05:33:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:18.730 05:33:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:18.730 05:33:21 -- scripts/common.sh@367 -- # return 0 00:18:18.730 05:33:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.730 05:33:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.730 --rc genhtml_branch_coverage=1 00:18:18.730 --rc genhtml_function_coverage=1 00:18:18.730 --rc genhtml_legend=1 00:18:18.730 --rc geninfo_all_blocks=1 00:18:18.730 --rc geninfo_unexecuted_blocks=1 00:18:18.730 00:18:18.730 ' 00:18:18.730 05:33:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.730 --rc genhtml_branch_coverage=1 00:18:18.730 --rc genhtml_function_coverage=1 00:18:18.730 --rc genhtml_legend=1 00:18:18.730 --rc geninfo_all_blocks=1 00:18:18.730 --rc geninfo_unexecuted_blocks=1 00:18:18.730 00:18:18.730 ' 00:18:18.730 05:33:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.730 --rc genhtml_branch_coverage=1 00:18:18.730 --rc genhtml_function_coverage=1 00:18:18.730 --rc genhtml_legend=1 00:18:18.730 --rc geninfo_all_blocks=1 00:18:18.730 --rc geninfo_unexecuted_blocks=1 00:18:18.730 00:18:18.730 ' 00:18:18.730 05:33:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.730 --rc genhtml_branch_coverage=1 00:18:18.730 --rc genhtml_function_coverage=1 00:18:18.730 --rc genhtml_legend=1 00:18:18.730 --rc geninfo_all_blocks=1 00:18:18.730 --rc geninfo_unexecuted_blocks=1 00:18:18.730 00:18:18.730 ' 00:18:18.730 05:33:21 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.730 05:33:21 -- nvmf/common.sh@7 -- # uname -s 00:18:18.730 05:33:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.730 05:33:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.730 05:33:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.730 05:33:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.730 05:33:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.730 05:33:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.730 05:33:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.730 05:33:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.730 05:33:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.730 05:33:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.730 05:33:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.730 05:33:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.730 05:33:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.730 05:33:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.730 05:33:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.730 05:33:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.730 05:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.730 05:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.730 05:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.730 05:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.730 05:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.730 05:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.730 05:33:21 -- paths/export.sh@5 -- # export PATH 00:18:18.730 05:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.730 05:33:21 -- nvmf/common.sh@46 -- # : 0 00:18:18.730 05:33:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.730 05:33:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.730 05:33:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.730 05:33:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.730 05:33:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.730 05:33:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.730 05:33:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.730 05:33:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.730 05:33:21 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.730 05:33:21 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.730 05:33:21 -- target/nmic.sh@14 -- # nvmftestinit 00:18:18.730 05:33:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.730 05:33:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.730 05:33:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.730 05:33:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.730 05:33:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.730 05:33:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.730 05:33:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.730 05:33:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.730 05:33:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:18.730 05:33:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:18.730 05:33:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:18.730 05:33:21 -- common/autotest_common.sh@10 -- # set +x 00:18:26.870 05:33:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.870 05:33:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.870 05:33:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.870 05:33:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.870 05:33:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.870 05:33:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.870 05:33:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.870 05:33:28 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.870 05:33:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.870 05:33:28 -- nvmf/common.sh@295 -- # e810=() 00:18:26.870 05:33:28 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.870 05:33:28 -- nvmf/common.sh@296 -- # x722=() 00:18:26.870 05:33:28 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.870 05:33:28 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.870 05:33:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.871 05:33:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.871 05:33:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.871 05:33:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.871 05:33:28 -- 
nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.871 05:33:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.871 05:33:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:26.871 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:26.871 05:33:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.871 05:33:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:26.871 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:26.871 05:33:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.871 05:33:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.871 05:33:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.871 05:33:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:26.871 Found net devices under 0000:31:00.0: cvl_0_0 00:18:26.871 05:33:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.871 05:33:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.871 05:33:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.871 05:33:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.871 05:33:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:26.871 Found net devices under 0000:31:00.1: cvl_0_1 00:18:26.871 05:33:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.871 05:33:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.871 05:33:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.871 05:33:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.871 05:33:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.871 05:33:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.871 05:33:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.871 05:33:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.871 05:33:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.871 05:33:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.871 05:33:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.871 05:33:28 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.871 05:33:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.871 05:33:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.871 05:33:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.871 05:33:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.871 05:33:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.871 05:33:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.871 05:33:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.871 05:33:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.871 05:33:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.871 05:33:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.871 05:33:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.871 05:33:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:18:26.871 00:18:26.871 --- 10.0.0.2 ping statistics --- 00:18:26.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.871 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:18:26.871 05:33:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:18:26.871 00:18:26.871 --- 10.0.0.1 ping statistics --- 00:18:26.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.871 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:18:26.871 05:33:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.871 05:33:29 -- nvmf/common.sh@410 -- # return 0 00:18:26.871 05:33:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.871 05:33:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.871 05:33:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:26.871 05:33:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:26.871 05:33:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.871 05:33:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:26.871 05:33:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:26.871 05:33:29 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:26.871 05:33:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.871 05:33:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.871 05:33:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.871 05:33:29 -- nvmf/common.sh@469 -- # nvmfpid=1821580 00:18:26.871 05:33:29 -- nvmf/common.sh@470 -- # waitforlisten 1821580 00:18:26.871 05:33:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:26.871 05:33:29 -- common/autotest_common.sh@829 -- # '[' -z 1821580 ']' 00:18:26.871 05:33:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.871 05:33:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.871 05:33:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
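For reference, the interface plumbing performed a few entries above by nvmf_tcp_init boils down to the short sketch below. It is a condensed restatement of the commands already shown in the log, not a general recipe: the cvl_0_0/cvl_0_1 names and the 10.0.0.x addresses are specific to this rig's renamed E810 ports, and root privileges are assumed.

  #!/usr/bin/env bash
  # Sketch of the nvmf_tcp_init steps traced above: move the target-side port
  # into its own network namespace and address both ends for NVMe/TCP at 4420.
  set -euo pipefail
  TGT_IF=cvl_0_0          # target-side port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # allow NVMe/TCP traffic in to the initiator-facing port
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # same sanity checks as in the log: one ping in each direction
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1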
00:18:26.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.871 05:33:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.871 05:33:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.871 [2024-12-07 05:33:29.327767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:26.871 [2024-12-07 05:33:29.327831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.871 [2024-12-07 05:33:29.402689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.871 [2024-12-07 05:33:29.477101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.871 [2024-12-07 05:33:29.477237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.872 [2024-12-07 05:33:29.477249] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.872 [2024-12-07 05:33:29.477258] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.872 [2024-12-07 05:33:29.477345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.872 [2024-12-07 05:33:29.477500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.872 [2024-12-07 05:33:29.477657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.872 [2024-12-07 05:33:29.477658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.133 05:33:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.133 05:33:30 -- common/autotest_common.sh@862 -- # return 0 00:18:27.133 05:33:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:27.133 05:33:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 05:33:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.133 05:33:30 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 [2024-12-07 05:33:30.171254] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 Malloc0 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 [2024-12-07 05:33:30.230664] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:27.133 test case1: single bdev can't be used in multiple subsystems 00:18:27.133 05:33:30 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.133 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.133 05:33:30 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:27.133 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.133 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.134 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.134 05:33:30 -- target/nmic.sh@28 -- # nmic_status=0 00:18:27.134 05:33:30 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:27.134 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.134 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.134 [2024-12-07 05:33:30.266596] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:27.134 [2024-12-07 05:33:30.266615] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:27.134 [2024-12-07 05:33:30.266623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.134 request: 00:18:27.134 { 00:18:27.134 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:27.134 "namespace": { 00:18:27.134 "bdev_name": "Malloc0" 00:18:27.134 }, 00:18:27.134 "method": "nvmf_subsystem_add_ns", 00:18:27.134 "req_id": 1 00:18:27.134 } 00:18:27.134 Got JSON-RPC error response 00:18:27.134 response: 00:18:27.134 { 00:18:27.134 "code": -32602, 00:18:27.134 "message": "Invalid parameters" 00:18:27.134 } 00:18:27.134 05:33:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:27.134 05:33:30 -- target/nmic.sh@29 -- # nmic_status=1 00:18:27.134 05:33:30 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:27.134 05:33:30 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:27.134 Adding namespace failed - expected result. 
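Test case 1 above exercises the rule that a bdev can be claimed by only one NVMe-oF subsystem: the second nvmf_subsystem_add_ns call is expected to fail with the "Invalid parameters" JSON-RPC error captured in the log. The same check can be reproduced outside the harness with scripts/rpc.py; the sketch below reuses the exact RPC arguments from this run and assumes a running nvmf_tgt reachable over the default /var/tmp/spdk.sock RPC socket.

  #!/usr/bin/env bash
  # Duplicate-namespace check, mirroring nmic.sh test case 1.
  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # A second subsystem must not be able to claim the same bdev.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: Malloc0 was added to two subsystems" >&2
      exit 1
  fi
  echo " Adding namespace failed - expected result."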
00:18:27.134 05:33:30 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:27.134 test case2: host connect to nvmf target in multiple paths 00:18:27.134 05:33:30 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:27.134 05:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.134 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.134 [2024-12-07 05:33:30.278742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:27.134 05:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.134 05:33:30 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.044 05:33:31 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:30.424 05:33:33 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.424 05:33:33 -- common/autotest_common.sh@1187 -- # local i=0 00:18:30.424 05:33:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.424 05:33:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:30.424 05:33:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.337 05:33:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.337 05:33:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.337 05:33:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.337 05:33:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.337 05:33:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.337 05:33:35 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.337 05:33:35 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:32.337 [global] 00:18:32.337 thread=1 00:18:32.337 invalidate=1 00:18:32.337 rw=write 00:18:32.337 time_based=1 00:18:32.337 runtime=1 00:18:32.337 ioengine=libaio 00:18:32.337 direct=1 00:18:32.337 bs=4096 00:18:32.337 iodepth=1 00:18:32.337 norandommap=0 00:18:32.337 numjobs=1 00:18:32.337 00:18:32.337 verify_dump=1 00:18:32.337 verify_backlog=512 00:18:32.337 verify_state_save=0 00:18:32.337 do_verify=1 00:18:32.337 verify=crc32c-intel 00:18:32.337 [job0] 00:18:32.337 filename=/dev/nvme0n1 00:18:32.337 Could not set queue depth (nvme0n1) 00:18:32.597 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.597 fio-3.35 00:18:32.597 Starting 1 thread 00:18:33.980 00:18:33.980 job0: (groupid=0, jobs=1): err= 0: pid=1822994: Sat Dec 7 05:33:36 2024 00:18:33.980 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec) 00:18:33.980 slat (nsec): min=26447, max=27754, avg=27001.94, stdev=320.64 00:18:33.980 clat (usec): min=905, max=42010, avg=39674.89, stdev=9675.74 00:18:33.980 lat (usec): min=932, max=42037, avg=39701.89, stdev=9675.69 00:18:33.980 clat percentiles (usec): 00:18:33.980 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41681], 20.00th=[41681], 00:18:33.980 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:33.980 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:33.980 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:33.980 | 99.99th=[42206] 00:18:33.980 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:18:33.980 slat (nsec): min=9141, max=85675, avg=30435.94, stdev=10165.58 00:18:33.980 clat (usec): min=245, max=810, avg=584.30, stdev=102.33 00:18:33.980 lat (usec): min=255, max=844, avg=614.74, stdev=106.54 00:18:33.980 clat percentiles (usec): 00:18:33.980 | 1.00th=[ 322], 5.00th=[ 412], 10.00th=[ 441], 20.00th=[ 506], 00:18:33.980 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:18:33.980 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 742], 00:18:33.980 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 807], 99.95th=[ 807], 00:18:33.980 | 99.99th=[ 807] 00:18:33.980 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:33.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:33.980 lat (usec) : 250=0.19%, 500=18.30%, 750=74.72%, 1000=3.58% 00:18:33.980 lat (msec) : 50=3.21% 00:18:33.980 cpu : usr=0.97%, sys=1.94%, ctx=530, majf=0, minf=1 00:18:33.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:33.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.980 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:33.980 00:18:33.980 Run status group 0 (all jobs): 00:18:33.980 READ: bw=69.7KiB/s (71.4kB/s), 69.7KiB/s-69.7KiB/s (71.4kB/s-71.4kB/s), io=72.0KiB (73.7kB), run=1033-1033msec 00:18:33.980 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:18:33.980 00:18:33.980 Disk stats (read/write): 00:18:33.980 nvme0n1: ios=64/512, merge=0/0, ticks=599/230, in_queue=829, util=93.19% 00:18:33.980 05:33:36 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:33.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:33.980 05:33:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:33.980 05:33:37 -- common/autotest_common.sh@1208 -- # local i=0 00:18:33.980 05:33:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:33.980 05:33:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.980 05:33:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:33.980 05:33:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.980 05:33:37 -- common/autotest_common.sh@1220 -- # return 0 00:18:33.980 05:33:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:33.980 05:33:37 -- target/nmic.sh@53 -- # nvmftestfini 00:18:33.980 05:33:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:33.980 05:33:37 -- nvmf/common.sh@116 -- # sync 00:18:33.980 05:33:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:33.980 05:33:37 -- nvmf/common.sh@119 -- # set +e 00:18:33.980 05:33:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:33.980 05:33:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:33.980 rmmod nvme_tcp 00:18:33.980 rmmod nvme_fabrics 00:18:33.980 rmmod nvme_keyring 00:18:33.980 05:33:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:33.980 05:33:37 -- nvmf/common.sh@123 -- # set -e 
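Stepping back to the short write pass above: fio-wrapper generated the job file shown before the run (4 KiB writes, libaio, queue depth 1, 1 s time-based, crc32c-intel verification). A rough standalone equivalent is sketched below; the file name nmic-job.fio is arbitrary, the wrapper's extra plumbing is omitted, and it assumes the connected namespace appears as /dev/nvme0n1 and that fio runs with root privileges.

  #!/usr/bin/env bash
  # Recreate the fio job from the log and run it against the raw namespace.
  cat > nmic-job.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic-job.fio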
00:18:33.980 05:33:37 -- nvmf/common.sh@124 -- # return 0 00:18:33.980 05:33:37 -- nvmf/common.sh@477 -- # '[' -n 1821580 ']' 00:18:33.980 05:33:37 -- nvmf/common.sh@478 -- # killprocess 1821580 00:18:33.980 05:33:37 -- common/autotest_common.sh@936 -- # '[' -z 1821580 ']' 00:18:33.980 05:33:37 -- common/autotest_common.sh@940 -- # kill -0 1821580 00:18:33.980 05:33:37 -- common/autotest_common.sh@941 -- # uname 00:18:33.980 05:33:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.980 05:33:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1821580 00:18:34.241 05:33:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:34.241 05:33:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:34.241 05:33:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1821580' 00:18:34.241 killing process with pid 1821580 00:18:34.241 05:33:37 -- common/autotest_common.sh@955 -- # kill 1821580 00:18:34.241 05:33:37 -- common/autotest_common.sh@960 -- # wait 1821580 00:18:34.241 05:33:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:34.241 05:33:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:34.241 05:33:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:34.241 05:33:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.241 05:33:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:34.241 05:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.241 05:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.241 05:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.782 05:33:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:36.782 00:18:36.782 real 0m17.929s 00:18:36.782 user 0m50.377s 00:18:36.782 sys 0m6.464s 00:18:36.782 05:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.782 05:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:36.782 ************************************ 00:18:36.782 END TEST nvmf_nmic 00:18:36.782 ************************************ 00:18:36.782 05:33:39 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:36.782 05:33:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.782 05:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:36.782 ************************************ 00:18:36.782 START TEST nvmf_fio_target 00:18:36.782 ************************************ 00:18:36.782 05:33:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:36.782 * Looking for test storage... 
00:18:36.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.782 05:33:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:36.782 05:33:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:36.782 05:33:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:36.782 05:33:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:36.782 05:33:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:36.782 05:33:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:36.782 05:33:39 -- scripts/common.sh@335 -- # IFS=.-: 00:18:36.782 05:33:39 -- scripts/common.sh@335 -- # read -ra ver1 00:18:36.782 05:33:39 -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.782 05:33:39 -- scripts/common.sh@336 -- # read -ra ver2 00:18:36.782 05:33:39 -- scripts/common.sh@337 -- # local 'op=<' 00:18:36.782 05:33:39 -- scripts/common.sh@339 -- # ver1_l=2 00:18:36.782 05:33:39 -- scripts/common.sh@340 -- # ver2_l=1 00:18:36.782 05:33:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:36.782 05:33:39 -- scripts/common.sh@343 -- # case "$op" in 00:18:36.782 05:33:39 -- scripts/common.sh@344 -- # : 1 00:18:36.782 05:33:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:36.782 05:33:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.782 05:33:39 -- scripts/common.sh@364 -- # decimal 1 00:18:36.782 05:33:39 -- scripts/common.sh@352 -- # local d=1 00:18:36.782 05:33:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.782 05:33:39 -- scripts/common.sh@354 -- # echo 1 00:18:36.782 05:33:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:36.782 05:33:39 -- scripts/common.sh@365 -- # decimal 2 00:18:36.782 05:33:39 -- scripts/common.sh@352 -- # local d=2 00:18:36.782 05:33:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.782 05:33:39 -- scripts/common.sh@354 -- # echo 2 00:18:36.782 05:33:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:36.782 05:33:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:36.782 05:33:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:36.782 05:33:39 -- scripts/common.sh@367 -- # return 0 00:18:36.782 05:33:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:36.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.782 --rc genhtml_branch_coverage=1 00:18:36.782 --rc genhtml_function_coverage=1 00:18:36.782 --rc genhtml_legend=1 00:18:36.782 --rc geninfo_all_blocks=1 00:18:36.782 --rc geninfo_unexecuted_blocks=1 00:18:36.782 00:18:36.782 ' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:36.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.782 --rc genhtml_branch_coverage=1 00:18:36.782 --rc genhtml_function_coverage=1 00:18:36.782 --rc genhtml_legend=1 00:18:36.782 --rc geninfo_all_blocks=1 00:18:36.782 --rc geninfo_unexecuted_blocks=1 00:18:36.782 00:18:36.782 ' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:36.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.782 --rc genhtml_branch_coverage=1 00:18:36.782 --rc genhtml_function_coverage=1 00:18:36.782 --rc genhtml_legend=1 00:18:36.782 --rc geninfo_all_blocks=1 00:18:36.782 --rc geninfo_unexecuted_blocks=1 00:18:36.782 00:18:36.782 
' 00:18:36.782 05:33:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:36.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.782 --rc genhtml_branch_coverage=1 00:18:36.782 --rc genhtml_function_coverage=1 00:18:36.782 --rc genhtml_legend=1 00:18:36.782 --rc geninfo_all_blocks=1 00:18:36.782 --rc geninfo_unexecuted_blocks=1 00:18:36.782 00:18:36.782 ' 00:18:36.782 05:33:39 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.782 05:33:39 -- nvmf/common.sh@7 -- # uname -s 00:18:36.782 05:33:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.782 05:33:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.782 05:33:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.782 05:33:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.782 05:33:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.782 05:33:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.782 05:33:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.782 05:33:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.782 05:33:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.782 05:33:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.782 05:33:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.782 05:33:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.782 05:33:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.782 05:33:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.782 05:33:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.782 05:33:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.782 05:33:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.782 05:33:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.782 05:33:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.782 05:33:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.782 05:33:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.782 05:33:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.782 05:33:39 -- paths/export.sh@5 -- # export PATH 00:18:36.783 05:33:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.783 05:33:39 -- nvmf/common.sh@46 -- # : 0 00:18:36.783 05:33:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:36.783 05:33:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:36.783 05:33:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:36.783 05:33:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.783 05:33:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.783 05:33:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:36.783 05:33:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:36.783 05:33:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:36.783 05:33:39 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.783 05:33:39 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.783 05:33:39 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.783 05:33:39 -- target/fio.sh@16 -- # nvmftestinit 00:18:36.783 05:33:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:36.783 05:33:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.783 05:33:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:36.783 05:33:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:36.783 05:33:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:36.783 05:33:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.783 05:33:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.783 05:33:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.783 05:33:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:36.783 05:33:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:36.783 05:33:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:36.783 05:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:44.925 05:33:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:44.925 05:33:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:44.925 05:33:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:44.925 05:33:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:44.925 05:33:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:44.925 05:33:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:44.925 05:33:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:44.925 05:33:46 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:44.925 05:33:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:44.925 05:33:46 -- nvmf/common.sh@295 -- # e810=() 00:18:44.925 05:33:46 -- nvmf/common.sh@295 -- # local -ga e810 00:18:44.925 05:33:46 -- nvmf/common.sh@296 -- # x722=() 00:18:44.925 05:33:46 -- nvmf/common.sh@296 -- # local -ga x722 00:18:44.925 05:33:46 -- nvmf/common.sh@297 -- # mlx=() 00:18:44.925 05:33:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:44.925 05:33:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.925 05:33:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:44.925 05:33:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:44.925 05:33:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:44.925 05:33:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:44.925 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:44.925 05:33:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:44.925 05:33:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:44.925 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:44.925 05:33:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:44.925 05:33:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.925 05:33:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
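The discovery loop above matches the two E810 functions (vendor 0x8086, device 0x159b) from a cached PCI scan and then resolves each one to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A stripped-down sketch of that sysfs lookup, with the PCI addresses hard-coded for illustration (the real script derives them from pci_bus_cache and also handles the Mellanox/RDMA cases):

    pci_devs=(0000:31:00.0 0000:31:00.1)        # the two E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Every netdev bound to this PCI function appears as a directory under .../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done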
00:18:44.925 05:33:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:44.925 Found net devices under 0000:31:00.0: cvl_0_0 00:18:44.925 05:33:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.925 05:33:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:44.925 05:33:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.925 05:33:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.925 05:33:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:44.925 Found net devices under 0000:31:00.1: cvl_0_1 00:18:44.925 05:33:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.925 05:33:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:44.925 05:33:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:44.925 05:33:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:44.925 05:33:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.925 05:33:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.925 05:33:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.925 05:33:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:44.925 05:33:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.925 05:33:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.925 05:33:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:44.925 05:33:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.925 05:33:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.925 05:33:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:44.925 05:33:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:44.925 05:33:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.925 05:33:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.925 05:33:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.925 05:33:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.925 05:33:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:44.925 05:33:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.925 05:33:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.925 05:33:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.925 05:33:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:44.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:18:44.925 00:18:44.925 --- 10.0.0.2 ping statistics --- 00:18:44.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.925 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:18:44.925 05:33:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:18:44.925 00:18:44.925 --- 10.0.0.1 ping statistics --- 00:18:44.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.925 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:18:44.925 05:33:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.925 05:33:47 -- nvmf/common.sh@410 -- # return 0 00:18:44.925 05:33:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:44.925 05:33:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.925 05:33:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:44.925 05:33:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:44.925 05:33:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.925 05:33:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:44.925 05:33:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:44.925 05:33:47 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:44.925 05:33:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:44.925 05:33:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.925 05:33:47 -- common/autotest_common.sh@10 -- # set +x 00:18:44.925 05:33:47 -- nvmf/common.sh@469 -- # nvmfpid=1827725 00:18:44.925 05:33:47 -- nvmf/common.sh@470 -- # waitforlisten 1827725 00:18:44.925 05:33:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.925 05:33:47 -- common/autotest_common.sh@829 -- # '[' -z 1827725 ']' 00:18:44.925 05:33:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.925 05:33:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.925 05:33:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.925 05:33:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.925 05:33:47 -- common/autotest_common.sh@10 -- # set +x 00:18:44.925 [2024-12-07 05:33:47.303036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:44.926 [2024-12-07 05:33:47.303105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.926 [2024-12-07 05:33:47.377128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.926 [2024-12-07 05:33:47.449697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:44.926 [2024-12-07 05:33:47.449832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.926 [2024-12-07 05:33:47.449842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.926 [2024-12-07 05:33:47.449851] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
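nvmf_tcp_init, traced above, splits the two ports between a network namespace and the host so that target and initiator talk over real NICs on the same machine: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the host namespace with 10.0.0.1/24, and TCP port 4420 is opened in the firewall. Roughly, omitting the helper wrappers and error handling:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host
    # The target application is then launched inside the namespace, which is why
    # NVMF_APP is prefixed with the netns exec command above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The -m 0xF core mask and -e 0xFFFF tracepoint mask match the nvmfappstart invocation above, and the reactor-started notices that follow confirm all four cores came up.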
00:18:44.926 [2024-12-07 05:33:47.450033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.926 [2024-12-07 05:33:47.450125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.926 [2024-12-07 05:33:47.450384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.926 [2024-12-07 05:33:47.450387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.926 05:33:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.926 05:33:48 -- common/autotest_common.sh@862 -- # return 0 00:18:44.926 05:33:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.926 05:33:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.926 05:33:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.926 05:33:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.926 05:33:48 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.187 [2024-12-07 05:33:48.278722] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.187 05:33:48 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.457 05:33:48 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:45.457 05:33:48 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.457 05:33:48 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:45.457 05:33:48 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.719 05:33:48 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:45.719 05:33:48 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.978 05:33:49 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:45.978 05:33:49 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:46.238 05:33:49 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.238 05:33:49 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:46.238 05:33:49 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.497 05:33:49 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:46.497 05:33:49 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.757 05:33:49 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:46.757 05:33:49 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:46.757 05:33:49 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:47.017 05:33:50 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:47.018 05:33:50 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.278 05:33:50 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:47.278 05:33:50 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:47.278 05:33:50 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.538 [2024-12-07 05:33:50.588562] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.538 05:33:50 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:47.798 05:33:50 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:47.799 05:33:50 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:49.710 05:33:52 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:49.710 05:33:52 -- common/autotest_common.sh@1187 -- # local i=0 00:18:49.710 05:33:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.710 05:33:52 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:18:49.710 05:33:52 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:18:49.710 05:33:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:51.206 05:33:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:51.206 05:33:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:51.206 05:33:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.466 05:33:54 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:18:51.466 05:33:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.466 05:33:54 -- common/autotest_common.sh@1197 -- # return 0 00:18:51.466 05:33:54 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:51.466 [global] 00:18:51.466 thread=1 00:18:51.466 invalidate=1 00:18:51.466 rw=write 00:18:51.466 time_based=1 00:18:51.466 runtime=1 00:18:51.466 ioengine=libaio 00:18:51.466 direct=1 00:18:51.466 bs=4096 00:18:51.466 iodepth=1 00:18:51.466 norandommap=0 00:18:51.466 numjobs=1 00:18:51.466 00:18:51.466 verify_dump=1 00:18:51.466 verify_backlog=512 00:18:51.466 verify_state_save=0 00:18:51.466 do_verify=1 00:18:51.466 verify=crc32c-intel 00:18:51.466 [job0] 00:18:51.466 filename=/dev/nvme0n1 00:18:51.466 [job1] 00:18:51.466 filename=/dev/nvme0n2 00:18:51.466 [job2] 00:18:51.466 filename=/dev/nvme0n3 00:18:51.466 [job3] 00:18:51.466 filename=/dev/nvme0n4 00:18:51.466 Could not set queue depth (nvme0n1) 00:18:51.466 Could not set queue depth (nvme0n2) 00:18:51.466 Could not set queue depth (nvme0n3) 00:18:51.466 Could not set queue depth (nvme0n4) 00:18:51.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.726 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.726 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.726 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.726 fio-3.35 
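Before the first write workload starts, fio.sh provisions the target over the RPC socket: seven 64 MiB malloc bdevs, a raid0 over Malloc2/Malloc3, and a concat over Malloc4-Malloc6 are created, four of them are exported as namespaces of nqn.2016-06.io.spdk:cnode1, and the initiator connects once to get the /dev/nvme0n1-n4 devices used in the job file dumped just above. The essential sequence, condensed from the traced rpc.py calls (rpc.py is scripts/rpc.py in the SPDK tree; the loops are a shorthand for the individual calls in the trace, and hostnqn/hostid come from nvme gen-hostnqn as set in common.sh):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for _ in $(seq 7); do rpc.py bdev_malloc_create 64 512; done   # Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side (host namespace): one connect exposes all four namespaces.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

waitforserial then waits until lsblk reports four devices with the SPDKISFASTANDAWESOME serial, and each fio job writes to one of them with 4 KiB blocks at queue depth 1 and crc32c verification, which is what the per-namespace results below report.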
00:18:51.726 Starting 4 threads 00:18:53.109 00:18:53.109 job0: (groupid=0, jobs=1): err= 0: pid=1829361: Sat Dec 7 05:33:56 2024 00:18:53.109 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:53.109 slat (nsec): min=7072, max=65514, avg=27128.34, stdev=4781.55 00:18:53.109 clat (usec): min=364, max=1304, avg=987.11, stdev=112.68 00:18:53.109 lat (usec): min=392, max=1331, avg=1014.23, stdev=113.44 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 578], 5.00th=[ 758], 10.00th=[ 873], 20.00th=[ 930], 00:18:53.109 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:18:53.109 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:18:53.109 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1303], 99.95th=[ 1303], 00:18:53.109 | 99.99th=[ 1303] 00:18:53.109 write: IOPS=736, BW=2945KiB/s (3016kB/s)(2948KiB/1001msec); 0 zone resets 00:18:53.109 slat (nsec): min=9721, max=53981, avg=32315.21, stdev=9075.59 00:18:53.109 clat (usec): min=283, max=1068, avg=606.92, stdev=122.05 00:18:53.109 lat (usec): min=294, max=1105, avg=639.24, stdev=124.74 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 326], 5.00th=[ 416], 10.00th=[ 453], 20.00th=[ 506], 00:18:53.109 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:18:53.109 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 799], 00:18:53.109 | 99.00th=[ 914], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:18:53.109 | 99.99th=[ 1074] 00:18:53.109 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.109 lat (usec) : 500=11.45%, 750=43.07%, 1000=23.78% 00:18:53.109 lat (msec) : 2=21.70% 00:18:53.109 cpu : usr=2.40%, sys=5.20%, ctx=1251, majf=0, minf=1 00:18:53.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 issued rwts: total=512,737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.109 job1: (groupid=0, jobs=1): err= 0: pid=1829362: Sat Dec 7 05:33:56 2024 00:18:53.109 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1014msec) 00:18:53.109 slat (nsec): min=27013, max=28576, avg=27737.06, stdev=377.66 00:18:53.109 clat (usec): min=1075, max=42105, avg=39128.18, stdev=9816.72 00:18:53.109 lat (usec): min=1103, max=42133, avg=39155.92, stdev=9816.73 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[40633], 20.00th=[41157], 00:18:53.109 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:18:53.109 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:53.109 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.109 | 99.99th=[42206] 00:18:53.109 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:53.109 slat (nsec): min=9647, max=70401, avg=33997.53, stdev=8764.15 00:18:53.109 clat (usec): min=244, max=1005, avg=637.82, stdev=131.51 00:18:53.109 lat (usec): min=279, max=1041, avg=671.81, stdev=134.10 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 289], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 545], 00:18:53.109 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:18:53.109 | 70.00th=[ 709], 80.00th=[ 
742], 90.00th=[ 791], 95.00th=[ 840], 00:18:53.109 | 99.00th=[ 938], 99.50th=[ 988], 99.90th=[ 1004], 99.95th=[ 1004], 00:18:53.109 | 99.99th=[ 1004] 00:18:53.109 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.109 lat (usec) : 250=0.19%, 500=14.18%, 750=65.60%, 1000=16.64% 00:18:53.109 lat (msec) : 2=0.38%, 50=3.02% 00:18:53.109 cpu : usr=0.89%, sys=2.37%, ctx=530, majf=0, minf=1 00:18:53.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.109 job2: (groupid=0, jobs=1): err= 0: pid=1829363: Sat Dec 7 05:33:56 2024 00:18:53.109 read: IOPS=380, BW=1524KiB/s (1560kB/s)(1568KiB/1029msec) 00:18:53.109 slat (nsec): min=21253, max=45135, avg=26986.69, stdev=1949.81 00:18:53.109 clat (usec): min=464, max=42011, avg=1714.23, stdev=5408.10 00:18:53.109 lat (usec): min=491, max=42037, avg=1741.22, stdev=5408.06 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 619], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:18:53.109 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1020], 00:18:53.109 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:18:53.109 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.109 | 99.99th=[42206] 00:18:53.109 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:18:53.109 slat (nsec): min=10109, max=66135, avg=32641.40, stdev=8410.35 00:18:53.109 clat (usec): min=252, max=1013, avg=627.08, stdev=122.92 00:18:53.109 lat (usec): min=274, max=1047, avg=659.72, stdev=124.95 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 359], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 529], 00:18:53.109 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:18:53.109 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:18:53.109 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1012], 99.95th=[ 1012], 00:18:53.109 | 99.99th=[ 1012] 00:18:53.109 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.109 lat (usec) : 500=8.96%, 750=41.48%, 1000=26.66% 00:18:53.109 lat (msec) : 2=22.12%, 50=0.77% 00:18:53.109 cpu : usr=1.95%, sys=2.14%, ctx=905, majf=0, minf=1 00:18:53.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 issued rwts: total=392,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.109 job3: (groupid=0, jobs=1): err= 0: pid=1829364: Sat Dec 7 05:33:56 2024 00:18:53.109 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:53.109 slat (nsec): min=27174, max=47015, avg=27834.63, stdev=1583.72 00:18:53.109 clat (usec): min=424, max=1234, avg=979.42, stdev=91.74 00:18:53.109 lat (usec): min=452, max=1262, avg=1007.26, stdev=91.64 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 717], 
5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 922], 00:18:53.109 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:18:53.109 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:18:53.109 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:18:53.109 | 99.99th=[ 1237] 00:18:53.109 write: IOPS=771, BW=3085KiB/s (3159kB/s)(3088KiB/1001msec); 0 zone resets 00:18:53.109 slat (nsec): min=3222, max=69534, avg=29198.30, stdev=11509.70 00:18:53.109 clat (usec): min=225, max=1148, avg=586.07, stdev=126.26 00:18:53.109 lat (usec): min=233, max=1182, avg=615.27, stdev=130.94 00:18:53.109 clat percentiles (usec): 00:18:53.109 | 1.00th=[ 285], 5.00th=[ 363], 10.00th=[ 424], 20.00th=[ 486], 00:18:53.109 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:18:53.109 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:18:53.109 | 99.00th=[ 914], 99.50th=[ 1004], 99.90th=[ 1156], 99.95th=[ 1156], 00:18:53.109 | 99.99th=[ 1156] 00:18:53.109 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.109 lat (usec) : 250=0.23%, 500=13.55%, 750=41.98%, 1000=26.48% 00:18:53.109 lat (msec) : 2=17.76% 00:18:53.109 cpu : usr=3.30%, sys=4.20%, ctx=1285, majf=0, minf=1 00:18:53.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.109 issued rwts: total=512,772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.109 00:18:53.109 Run status group 0 (all jobs): 00:18:53.109 READ: bw=5570KiB/s (5704kB/s), 67.1KiB/s-2046KiB/s (68.7kB/s-2095kB/s), io=5732KiB (5870kB), run=1001-1029msec 00:18:53.109 WRITE: bw=9846KiB/s (10.1MB/s), 1990KiB/s-3085KiB/s (2038kB/s-3159kB/s), io=9.89MiB (10.4MB), run=1001-1029msec 00:18:53.109 00:18:53.109 Disk stats (read/write): 00:18:53.109 nvme0n1: ios=532/512, merge=0/0, ticks=695/249, in_queue=944, util=83.97% 00:18:53.109 nvme0n2: ios=61/512, merge=0/0, ticks=840/249, in_queue=1089, util=87.84% 00:18:53.109 nvme0n3: ios=450/512, merge=0/0, ticks=595/306, in_queue=901, util=95.56% 00:18:53.109 nvme0n4: ios=567/512, merge=0/0, ticks=1282/232, in_queue=1514, util=97.64% 00:18:53.109 05:33:56 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:53.109 [global] 00:18:53.109 thread=1 00:18:53.109 invalidate=1 00:18:53.109 rw=randwrite 00:18:53.109 time_based=1 00:18:53.109 runtime=1 00:18:53.109 ioengine=libaio 00:18:53.109 direct=1 00:18:53.110 bs=4096 00:18:53.110 iodepth=1 00:18:53.110 norandommap=0 00:18:53.110 numjobs=1 00:18:53.110 00:18:53.110 verify_dump=1 00:18:53.110 verify_backlog=512 00:18:53.110 verify_state_save=0 00:18:53.110 do_verify=1 00:18:53.110 verify=crc32c-intel 00:18:53.110 [job0] 00:18:53.110 filename=/dev/nvme0n1 00:18:53.110 [job1] 00:18:53.110 filename=/dev/nvme0n2 00:18:53.110 [job2] 00:18:53.110 filename=/dev/nvme0n3 00:18:53.110 [job3] 00:18:53.110 filename=/dev/nvme0n4 00:18:53.110 Could not set queue depth (nvme0n1) 00:18:53.110 Could not set queue depth (nvme0n2) 00:18:53.110 Could not set queue depth (nvme0n3) 00:18:53.110 Could not set queue depth (nvme0n4) 00:18:53.369 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.369 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.369 fio-3.35 00:18:53.369 Starting 4 threads 00:18:54.754 00:18:54.754 job0: (groupid=0, jobs=1): err= 0: pid=1829892: Sat Dec 7 05:33:57 2024 00:18:54.754 read: IOPS=309, BW=1238KiB/s (1268kB/s)(1244KiB/1005msec) 00:18:54.754 slat (nsec): min=6716, max=44183, avg=25481.54, stdev=4001.53 00:18:54.754 clat (usec): min=513, max=42112, avg=2224.24, stdev=7170.16 00:18:54.754 lat (usec): min=540, max=42138, avg=2249.72, stdev=7170.36 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 570], 5.00th=[ 676], 10.00th=[ 758], 20.00th=[ 832], 00:18:54.754 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 947], 60.00th=[ 971], 00:18:54.754 | 70.00th=[ 996], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1123], 00:18:54.754 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:54.754 | 99.99th=[42206] 00:18:54.754 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:18:54.754 slat (nsec): min=8792, max=51790, avg=28301.53, stdev=9594.74 00:18:54.754 clat (usec): min=158, max=993, avg=555.55, stdev=160.74 00:18:54.754 lat (usec): min=173, max=1025, avg=583.86, stdev=164.82 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 186], 5.00th=[ 265], 10.00th=[ 322], 20.00th=[ 429], 00:18:54.754 | 30.00th=[ 478], 40.00th=[ 519], 50.00th=[ 570], 60.00th=[ 611], 00:18:54.754 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 799], 00:18:54.754 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:18:54.754 | 99.99th=[ 996] 00:18:54.754 bw ( KiB/s): min= 4087, max= 4087, per=40.03%, avg=4087.00, stdev= 0.00, samples=1 00:18:54.754 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:54.754 lat (usec) : 250=2.79%, 500=19.32%, 750=38.03%, 1000=29.16% 00:18:54.754 lat (msec) : 2=9.48%, 50=1.22% 00:18:54.754 cpu : usr=1.69%, sys=2.99%, ctx=824, majf=0, minf=1 00:18:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 issued rwts: total=311,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.754 job1: (groupid=0, jobs=1): err= 0: pid=1829893: Sat Dec 7 05:33:57 2024 00:18:54.754 read: IOPS=434, BW=1738KiB/s (1780kB/s)(1740KiB/1001msec) 00:18:54.754 slat (nsec): min=6761, max=82092, avg=26992.48, stdev=6081.87 00:18:54.754 clat (usec): min=546, max=41626, avg=1559.32, stdev=4708.84 00:18:54.754 lat (usec): min=554, max=41654, avg=1586.31, stdev=4708.55 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 685], 5.00th=[ 742], 10.00th=[ 775], 20.00th=[ 816], 00:18:54.754 | 30.00th=[ 857], 40.00th=[ 955], 50.00th=[ 1074], 60.00th=[ 1106], 00:18:54.754 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1254], 00:18:54.754 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:54.754 | 99.99th=[41681] 00:18:54.754 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 
0 zone resets 00:18:54.754 slat (nsec): min=9353, max=54135, avg=31194.21, stdev=9058.51 00:18:54.754 clat (usec): min=255, max=970, avg=559.06, stdev=120.61 00:18:54.754 lat (usec): min=266, max=1006, avg=590.25, stdev=123.85 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 310], 5.00th=[ 363], 10.00th=[ 412], 20.00th=[ 457], 00:18:54.754 | 30.00th=[ 494], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 586], 00:18:54.754 | 70.00th=[ 619], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 742], 00:18:54.754 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 971], 99.95th=[ 971], 00:18:54.754 | 99.99th=[ 971] 00:18:54.754 bw ( KiB/s): min= 4087, max= 4087, per=40.03%, avg=4087.00, stdev= 0.00, samples=1 00:18:54.754 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:54.754 lat (usec) : 500=17.42%, 750=36.75%, 1000=19.75% 00:18:54.754 lat (msec) : 2=25.45%, 50=0.63% 00:18:54.754 cpu : usr=1.30%, sys=4.40%, ctx=949, majf=0, minf=1 00:18:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 issued rwts: total=435,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.754 job2: (groupid=0, jobs=1): err= 0: pid=1829894: Sat Dec 7 05:33:57 2024 00:18:54.754 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:54.754 slat (nsec): min=7800, max=63419, avg=26882.39, stdev=4138.94 00:18:54.754 clat (usec): min=548, max=1331, avg=979.78, stdev=133.16 00:18:54.754 lat (usec): min=575, max=1358, avg=1006.66, stdev=133.40 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 807], 20.00th=[ 873], 00:18:54.754 | 30.00th=[ 914], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1029], 00:18:54.754 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:18:54.754 | 99.00th=[ 1221], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:18:54.754 | 99.99th=[ 1336] 00:18:54.754 write: IOPS=815, BW=3261KiB/s (3339kB/s)(3264KiB/1001msec); 0 zone resets 00:18:54.754 slat (nsec): min=9833, max=79506, avg=31069.51, stdev=9388.03 00:18:54.754 clat (usec): min=147, max=1511, avg=549.74, stdev=138.24 00:18:54.754 lat (usec): min=159, max=1545, avg=580.81, stdev=141.09 00:18:54.754 clat percentiles (usec): 00:18:54.754 | 1.00th=[ 260], 5.00th=[ 318], 10.00th=[ 379], 20.00th=[ 429], 00:18:54.754 | 30.00th=[ 486], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 586], 00:18:54.754 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:18:54.754 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 1516], 99.95th=[ 1516], 00:18:54.754 | 99.99th=[ 1516] 00:18:54.754 bw ( KiB/s): min= 4087, max= 4087, per=40.03%, avg=4087.00, stdev= 0.00, samples=1 00:18:54.754 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:54.754 lat (usec) : 250=0.60%, 500=20.86%, 750=38.86%, 1000=21.01% 00:18:54.754 lat (msec) : 2=18.67% 00:18:54.754 cpu : usr=2.20%, sys=3.80%, ctx=1329, majf=0, minf=1 00:18:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.754 issued rwts: total=512,816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.754 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:18:54.754 job3: (groupid=0, jobs=1): err= 0: pid=1829895: Sat Dec 7 05:33:57 2024 00:18:54.754 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:54.754 slat (nsec): min=8806, max=60256, avg=27091.28, stdev=3381.13 00:18:54.754 clat (usec): min=579, max=40998, avg=1152.13, stdev=1772.29 00:18:54.754 lat (usec): min=588, max=41009, avg=1179.22, stdev=1771.57 00:18:54.755 clat percentiles (usec): 00:18:54.755 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 857], 20.00th=[ 947], 00:18:54.755 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1074], 60.00th=[ 1106], 00:18:54.755 | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1319], 95.00th=[ 1352], 00:18:54.755 | 99.00th=[ 1450], 99.50th=[ 1516], 99.90th=[41157], 99.95th=[41157], 00:18:54.755 | 99.99th=[41157] 00:18:54.755 write: IOPS=724, BW=2897KiB/s (2967kB/s)(2900KiB/1001msec); 0 zone resets 00:18:54.755 slat (nsec): min=9901, max=62345, avg=32790.64, stdev=6942.30 00:18:54.755 clat (usec): min=142, max=985, avg=499.57, stdev=148.90 00:18:54.755 lat (usec): min=175, max=1019, avg=532.36, stdev=150.28 00:18:54.755 clat percentiles (usec): 00:18:54.755 | 1.00th=[ 241], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 359], 00:18:54.755 | 30.00th=[ 412], 40.00th=[ 453], 50.00th=[ 486], 60.00th=[ 529], 00:18:54.755 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 701], 95.00th=[ 742], 00:18:54.755 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:18:54.755 | 99.99th=[ 988] 00:18:54.755 bw ( KiB/s): min= 4087, max= 4087, per=40.03%, avg=4087.00, stdev= 0.00, samples=1 00:18:54.755 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:54.755 lat (usec) : 250=0.89%, 500=29.99%, 750=25.95%, 1000=14.31% 00:18:54.755 lat (msec) : 2=28.78%, 50=0.08% 00:18:54.755 cpu : usr=1.40%, sys=4.40%, ctx=1239, majf=0, minf=1 00:18:54.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.755 issued rwts: total=512,725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.755 00:18:54.755 Run status group 0 (all jobs): 00:18:54.755 READ: bw=7045KiB/s (7214kB/s), 1238KiB/s-2046KiB/s (1268kB/s-2095kB/s), io=7080KiB (7250kB), run=1001-1005msec 00:18:54.755 WRITE: bw=9.97MiB/s (10.5MB/s), 2038KiB/s-3261KiB/s (2087kB/s-3339kB/s), io=10.0MiB (10.5MB), run=1001-1005msec 00:18:54.755 00:18:54.755 Disk stats (read/write): 00:18:54.755 nvme0n1: ios=319/512, merge=0/0, ticks=530/228, in_queue=758, util=87.07% 00:18:54.755 nvme0n2: ios=319/512, merge=0/0, ticks=1377/236, in_queue=1613, util=88.28% 00:18:54.755 nvme0n3: ios=561/535, merge=0/0, ticks=595/263, in_queue=858, util=95.46% 00:18:54.755 nvme0n4: ios=530/512, merge=0/0, ticks=609/247, in_queue=856, util=97.54% 00:18:54.755 05:33:57 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:54.755 [global] 00:18:54.755 thread=1 00:18:54.755 invalidate=1 00:18:54.755 rw=write 00:18:54.755 time_based=1 00:18:54.755 runtime=1 00:18:54.755 ioengine=libaio 00:18:54.755 direct=1 00:18:54.755 bs=4096 00:18:54.755 iodepth=128 00:18:54.755 norandommap=0 00:18:54.755 numjobs=1 00:18:54.755 00:18:54.755 verify_dump=1 00:18:54.755 verify_backlog=512 00:18:54.755 verify_state_save=0 00:18:54.755 do_verify=1 00:18:54.755 verify=crc32c-intel 00:18:54.755 
[job0] 00:18:54.755 filename=/dev/nvme0n1 00:18:54.755 [job1] 00:18:54.755 filename=/dev/nvme0n2 00:18:54.755 [job2] 00:18:54.755 filename=/dev/nvme0n3 00:18:54.755 [job3] 00:18:54.755 filename=/dev/nvme0n4 00:18:54.755 Could not set queue depth (nvme0n1) 00:18:54.755 Could not set queue depth (nvme0n2) 00:18:54.755 Could not set queue depth (nvme0n3) 00:18:54.755 Could not set queue depth (nvme0n4) 00:18:55.015 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.015 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.015 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.015 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.015 fio-3.35 00:18:55.015 Starting 4 threads 00:18:56.397 00:18:56.397 job0: (groupid=0, jobs=1): err= 0: pid=1830414: Sat Dec 7 05:33:59 2024 00:18:56.397 read: IOPS=6258, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1002msec) 00:18:56.397 slat (nsec): min=877, max=3717.0k, avg=81390.61, stdev=422839.21 00:18:56.397 clat (usec): min=1348, max=14513, avg=10097.04, stdev=1354.36 00:18:56.397 lat (usec): min=1927, max=14543, avg=10178.43, stdev=1392.20 00:18:56.397 clat percentiles (usec): 00:18:56.397 | 1.00th=[ 5145], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 9241], 00:18:56.397 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:18:56.397 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:18:56.397 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14353], 00:18:56.397 | 99.99th=[14484] 00:18:56.397 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:18:56.397 slat (nsec): min=1537, max=6026.1k, avg=70417.26, stdev=302612.11 00:18:56.397 clat (usec): min=807, max=15864, avg=9583.61, stdev=1665.32 00:18:56.397 lat (usec): min=828, max=15869, avg=9654.03, stdev=1673.28 00:18:56.397 clat percentiles (usec): 00:18:56.397 | 1.00th=[ 3359], 5.00th=[ 6128], 10.00th=[ 8160], 20.00th=[ 9110], 00:18:56.397 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:18:56.398 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11207], 95.00th=[11994], 00:18:56.398 | 99.00th=[14091], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:18:56.398 | 99.99th=[15926] 00:18:56.398 bw ( KiB/s): min=25712, max=27536, per=29.09%, avg=26624.00, stdev=1289.76, samples=2 00:18:56.398 iops : min= 6428, max= 6884, avg=6656.00, stdev=322.44, samples=2 00:18:56.398 lat (usec) : 1000=0.02% 00:18:56.398 lat (msec) : 2=0.12%, 4=0.50%, 10=54.73%, 20=44.63% 00:18:56.398 cpu : usr=2.50%, sys=4.20%, ctx=925, majf=0, minf=1 00:18:56.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:56.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.398 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.398 job1: (groupid=0, jobs=1): err= 0: pid=1830415: Sat Dec 7 05:33:59 2024 00:18:56.398 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:18:56.398 slat (nsec): min=937, max=21351k, avg=186424.98, stdev=1287001.35 00:18:56.398 clat (usec): min=6338, max=49409, avg=23984.47, stdev=9765.27 00:18:56.398 lat (usec): min=6357, max=49438, 
avg=24170.89, stdev=9880.89 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[11994], 00:18:56.398 | 30.00th=[21103], 40.00th=[22938], 50.00th=[23987], 60.00th=[27132], 00:18:56.398 | 70.00th=[28967], 80.00th=[32637], 90.00th=[36439], 95.00th=[36439], 00:18:56.398 | 99.00th=[45351], 99.50th=[46400], 99.90th=[46924], 99.95th=[49021], 00:18:56.398 | 99.99th=[49546] 00:18:56.398 write: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1005msec); 0 zone resets 00:18:56.398 slat (nsec): min=1558, max=15330k, avg=176830.00, stdev=967781.92 00:18:56.398 clat (usec): min=404, max=64617, avg=22948.33, stdev=15689.98 00:18:56.398 lat (usec): min=1194, max=64627, avg=23125.16, stdev=15806.14 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 5080], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[13829], 00:18:56.398 | 30.00th=[14484], 40.00th=[16188], 50.00th=[16909], 60.00th=[19006], 00:18:56.398 | 70.00th=[21627], 80.00th=[27657], 90.00th=[55313], 95.00th=[60556], 00:18:56.398 | 99.00th=[62653], 99.50th=[63701], 99.90th=[64750], 99.95th=[64750], 00:18:56.398 | 99.99th=[64750] 00:18:56.398 bw ( KiB/s): min= 8264, max=13752, per=12.03%, avg=11008.00, stdev=3880.60, samples=2 00:18:56.398 iops : min= 2066, max= 3438, avg=2752.00, stdev=970.15, samples=2 00:18:56.398 lat (usec) : 500=0.02% 00:18:56.398 lat (msec) : 2=0.04%, 10=16.32%, 20=30.17%, 50=46.36%, 100=7.10% 00:18:56.398 cpu : usr=2.19%, sys=3.98%, ctx=251, majf=0, minf=1 00:18:56.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:56.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.398 issued rwts: total=2560,2880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.398 job2: (groupid=0, jobs=1): err= 0: pid=1830418: Sat Dec 7 05:33:59 2024 00:18:56.398 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:18:56.398 slat (nsec): min=960, max=9359.9k, avg=77671.16, stdev=553727.40 00:18:56.398 clat (usec): min=2777, max=24146, avg=10556.33, stdev=2836.50 00:18:56.398 lat (usec): min=2781, max=24148, avg=10634.00, stdev=2854.64 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8356], 00:18:56.398 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10814], 00:18:56.398 | 70.00th=[11863], 80.00th=[12780], 90.00th=[14222], 95.00th=[15926], 00:18:56.398 | 99.00th=[17695], 99.50th=[20841], 99.90th=[23462], 99.95th=[24249], 00:18:56.398 | 99.99th=[24249] 00:18:56.398 write: IOPS=6661, BW=26.0MiB/s (27.3MB/s)(26.2MiB/1005msec); 0 zone resets 00:18:56.398 slat (nsec): min=1647, max=13754k, avg=66014.92, stdev=510126.85 00:18:56.398 clat (usec): min=1119, max=24144, avg=8546.22, stdev=2953.03 00:18:56.398 lat (usec): min=1131, max=24147, avg=8612.23, stdev=2959.55 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 3687], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5669], 00:18:56.398 | 30.00th=[ 6587], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 9110], 00:18:56.398 | 70.00th=[10028], 80.00th=[10814], 90.00th=[13173], 95.00th=[14615], 00:18:56.398 | 99.00th=[15270], 99.50th=[15270], 99.90th=[16450], 99.95th=[19792], 00:18:56.398 | 99.99th=[24249] 00:18:56.398 bw ( KiB/s): min=26312, max=26936, per=29.09%, avg=26624.00, stdev=441.23, samples=2 00:18:56.398 iops : min= 6578, max= 6734, avg=6656.00, 
stdev=110.31, samples=2 00:18:56.398 lat (msec) : 2=0.04%, 4=0.68%, 10=59.97%, 20=39.02%, 50=0.29% 00:18:56.398 cpu : usr=5.48%, sys=7.77%, ctx=294, majf=0, minf=2 00:18:56.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:56.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.398 issued rwts: total=6656,6695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.398 job3: (groupid=0, jobs=1): err= 0: pid=1830419: Sat Dec 7 05:33:59 2024 00:18:56.398 read: IOPS=7020, BW=27.4MiB/s (28.8MB/s)(28.7MiB/1045msec) 00:18:56.398 slat (nsec): min=905, max=9557.5k, avg=69617.25, stdev=404615.55 00:18:56.398 clat (usec): min=5416, max=51248, avg=9397.86, stdev=5410.72 00:18:56.398 lat (usec): min=5662, max=53929, avg=9467.48, stdev=5425.25 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:18:56.398 | 30.00th=[ 8160], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:18:56.398 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[12387], 00:18:56.398 | 99.00th=[47973], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:18:56.398 | 99.99th=[51119] 00:18:56.398 write: IOPS=7349, BW=28.7MiB/s (30.1MB/s)(30.0MiB/1045msec); 0 zone resets 00:18:56.398 slat (nsec): min=1566, max=6514.0k, avg=61136.03, stdev=314115.16 00:18:56.398 clat (usec): min=1184, max=20363, avg=8282.94, stdev=1417.26 00:18:56.398 lat (usec): min=1195, max=20367, avg=8344.08, stdev=1434.24 00:18:56.398 clat percentiles (usec): 00:18:56.398 | 1.00th=[ 5276], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7504], 00:18:56.398 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8455], 00:18:56.398 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[10159], 00:18:56.398 | 99.00th=[14615], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:18:56.398 | 99.99th=[20317] 00:18:56.398 bw ( KiB/s): min=28936, max=32504, per=33.56%, avg=30720.00, stdev=2522.96, samples=2 00:18:56.398 iops : min= 7234, max= 8126, avg=7680.00, stdev=630.74, samples=2 00:18:56.398 lat (msec) : 2=0.05%, 10=91.78%, 20=7.15%, 50=0.73%, 100=0.28% 00:18:56.398 cpu : usr=2.78%, sys=3.45%, ctx=931, majf=0, minf=1 00:18:56.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:56.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.398 issued rwts: total=7336,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.398 00:18:56.398 Run status group 0 (all jobs): 00:18:56.398 READ: bw=85.3MiB/s (89.5MB/s), 9.95MiB/s-27.4MiB/s (10.4MB/s-28.8MB/s), io=89.2MiB (93.5MB), run=1002-1045msec 00:18:56.398 WRITE: bw=89.4MiB/s (93.7MB/s), 11.2MiB/s-28.7MiB/s (11.7MB/s-30.1MB/s), io=93.4MiB (97.9MB), run=1002-1045msec 00:18:56.398 00:18:56.398 Disk stats (read/write): 00:18:56.398 nvme0n1: ios=5253/5632, merge=0/0, ticks=18135/19070, in_queue=37205, util=86.87% 00:18:56.398 nvme0n2: ios=2084/2519, merge=0/0, ticks=22682/28571, in_queue=51253, util=86.94% 00:18:56.398 nvme0n3: ios=5385/5632, merge=0/0, ticks=53964/46161, in_queue=100125, util=88.27% 00:18:56.398 nvme0n4: ios=6178/6259, merge=0/0, ticks=18709/18546, in_queue=37255, util=91.44% 00:18:56.399 05:33:59 -- 
target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:56.399 [global] 00:18:56.399 thread=1 00:18:56.399 invalidate=1 00:18:56.399 rw=randwrite 00:18:56.399 time_based=1 00:18:56.399 runtime=1 00:18:56.399 ioengine=libaio 00:18:56.399 direct=1 00:18:56.399 bs=4096 00:18:56.399 iodepth=128 00:18:56.399 norandommap=0 00:18:56.399 numjobs=1 00:18:56.399 00:18:56.399 verify_dump=1 00:18:56.399 verify_backlog=512 00:18:56.399 verify_state_save=0 00:18:56.399 do_verify=1 00:18:56.399 verify=crc32c-intel 00:18:56.399 [job0] 00:18:56.399 filename=/dev/nvme0n1 00:18:56.399 [job1] 00:18:56.399 filename=/dev/nvme0n2 00:18:56.399 [job2] 00:18:56.399 filename=/dev/nvme0n3 00:18:56.399 [job3] 00:18:56.399 filename=/dev/nvme0n4 00:18:56.399 Could not set queue depth (nvme0n1) 00:18:56.399 Could not set queue depth (nvme0n2) 00:18:56.399 Could not set queue depth (nvme0n3) 00:18:56.399 Could not set queue depth (nvme0n4) 00:18:56.969 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:56.969 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:56.969 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:56.969 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:56.969 fio-3.35 00:18:56.969 Starting 4 threads 00:18:58.352 00:18:58.352 job0: (groupid=0, jobs=1): err= 0: pid=1830945: Sat Dec 7 05:34:01 2024 00:18:58.352 read: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec) 00:18:58.352 slat (nsec): min=899, max=6549.1k, avg=53262.47, stdev=407252.60 00:18:58.352 clat (usec): min=995, max=14677, avg=7229.09, stdev=1869.38 00:18:58.352 lat (usec): min=1010, max=16547, avg=7282.35, stdev=1894.75 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 2671], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5800], 00:18:58.352 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7373], 00:18:58.352 | 70.00th=[ 7832], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10814], 00:18:58.352 | 99.00th=[12518], 99.50th=[13435], 99.90th=[14615], 99.95th=[14615], 00:18:58.352 | 99.99th=[14615] 00:18:58.352 write: IOPS=9118, BW=35.6MiB/s (37.3MB/s)(35.9MiB/1007msec); 0 zone resets 00:18:58.352 slat (nsec): min=1458, max=21924k, avg=50348.34, stdev=394324.53 00:18:58.352 clat (usec): min=682, max=34888, avg=7066.17, stdev=3508.06 00:18:58.352 lat (usec): min=691, max=34919, avg=7116.51, stdev=3524.27 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 2180], 5.00th=[ 3523], 10.00th=[ 4293], 20.00th=[ 5211], 00:18:58.352 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6915], 00:18:58.352 | 70.00th=[ 7177], 80.00th=[ 8160], 90.00th=[ 9503], 95.00th=[10683], 00:18:58.352 | 99.00th=[25560], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:18:58.352 | 99.99th=[34866] 00:18:58.352 bw ( KiB/s): min=35312, max=37128, per=36.63%, avg=36220.00, stdev=1284.11, samples=2 00:18:58.352 iops : min= 8828, max= 9282, avg=9055.00, stdev=321.03, samples=2 00:18:58.352 lat (usec) : 750=0.03%, 1000=0.01% 00:18:58.352 lat (msec) : 2=0.37%, 4=5.07%, 10=86.34%, 20=7.11%, 50=1.08% 00:18:58.352 cpu : usr=6.46%, sys=8.45%, ctx=629, majf=0, minf=2 00:18:58.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.352 issued rwts: total=8704,9182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.352 job1: (groupid=0, jobs=1): err= 0: pid=1830946: Sat Dec 7 05:34:01 2024 00:18:58.352 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:18:58.352 slat (nsec): min=943, max=27038k, avg=91123.25, stdev=972739.27 00:18:58.352 clat (usec): min=2361, max=69476, avg=14326.95, stdev=11337.26 00:18:58.352 lat (usec): min=2368, max=69501, avg=14418.07, stdev=11428.84 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 3130], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6980], 00:18:58.352 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[11076], 00:18:58.352 | 70.00th=[12387], 80.00th=[20579], 90.00th=[31327], 95.00th=[42206], 00:18:58.352 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[56361], 00:18:58.352 | 99.99th=[69731] 00:18:58.352 write: IOPS=4379, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:18:58.352 slat (usec): min=2, max=19861, avg=123.43, stdev=982.29 00:18:58.352 clat (usec): min=1011, max=77510, avg=15678.68, stdev=15185.31 00:18:58.352 lat (usec): min=1020, max=77519, avg=15802.11, stdev=15311.91 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 2606], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 6325], 00:18:58.352 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 9503], 60.00th=[10290], 00:18:58.352 | 70.00th=[16057], 80.00th=[21627], 90.00th=[33162], 95.00th=[55313], 00:18:58.352 | 99.00th=[69731], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:18:58.352 | 99.99th=[77071] 00:18:58.352 bw ( KiB/s): min= 9680, max=24624, per=17.34%, avg=17152.00, stdev=10567.00, samples=2 00:18:58.352 iops : min= 2420, max= 6156, avg=4288.00, stdev=2641.75, samples=2 00:18:58.352 lat (msec) : 2=0.49%, 4=1.92%, 10=53.64%, 20=22.37%, 50=17.21% 00:18:58.352 lat (msec) : 100=4.37% 00:18:58.352 cpu : usr=3.18%, sys=5.76%, ctx=211, majf=0, minf=1 00:18:58.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.352 issued rwts: total=4096,4415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.352 job2: (groupid=0, jobs=1): err= 0: pid=1830947: Sat Dec 7 05:34:01 2024 00:18:58.352 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:18:58.352 slat (nsec): min=952, max=16114k, avg=85021.73, stdev=719004.95 00:18:58.352 clat (usec): min=2182, max=40125, avg=11654.83, stdev=6710.68 00:18:58.352 lat (usec): min=2187, max=40132, avg=11739.85, stdev=6760.04 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 4113], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6783], 00:18:58.352 | 30.00th=[ 7504], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9896], 00:18:58.352 | 70.00th=[12125], 80.00th=[16581], 90.00th=[23462], 95.00th=[27132], 00:18:58.352 | 99.00th=[30802], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:18:58.352 | 99.99th=[40109] 00:18:58.352 write: IOPS=6198, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1005msec); 0 zone resets 00:18:58.352 slat (nsec): min=1594, max=13095k, avg=67171.94, stdev=586528.33 00:18:58.352 clat (usec): min=1242, max=29063, avg=8983.96, stdev=3732.97 
00:18:58.352 lat (usec): min=1253, max=29074, avg=9051.13, stdev=3774.46 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 3163], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 6849], 00:18:58.352 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8356], 00:18:58.352 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[13435], 95.00th=[14615], 00:18:58.352 | 99.00th=[21890], 99.50th=[27132], 99.90th=[27395], 99.95th=[27395], 00:18:58.352 | 99.99th=[28967] 00:18:58.352 bw ( KiB/s): min=17080, max=32072, per=24.85%, avg=24576.00, stdev=10600.94, samples=2 00:18:58.352 iops : min= 4270, max= 8018, avg=6144.00, stdev=2650.24, samples=2 00:18:58.352 lat (msec) : 2=0.09%, 4=2.25%, 10=63.74%, 20=26.53%, 50=7.40% 00:18:58.352 cpu : usr=4.48%, sys=6.97%, ctx=332, majf=0, minf=1 00:18:58.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.352 issued rwts: total=6144,6229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.352 job3: (groupid=0, jobs=1): err= 0: pid=1830948: Sat Dec 7 05:34:01 2024 00:18:58.352 read: IOPS=4870, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1009msec) 00:18:58.352 slat (usec): min=2, max=20686, avg=108.95, stdev=886.52 00:18:58.352 clat (usec): min=2562, max=42514, avg=14872.38, stdev=5982.28 00:18:58.352 lat (usec): min=4475, max=42542, avg=14981.33, stdev=6034.60 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 5276], 5.00th=[ 7439], 10.00th=[ 8225], 20.00th=[ 9896], 00:18:58.352 | 30.00th=[10814], 40.00th=[12256], 50.00th=[13435], 60.00th=[15664], 00:18:58.352 | 70.00th=[16581], 80.00th=[20317], 90.00th=[22676], 95.00th=[26084], 00:18:58.352 | 99.00th=[33162], 99.50th=[33162], 99.90th=[34866], 99.95th=[34866], 00:18:58.352 | 99.99th=[42730] 00:18:58.352 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:18:58.352 slat (usec): min=3, max=12008, avg=85.16, stdev=712.04 00:18:58.352 clat (usec): min=1196, max=28544, avg=10678.49, stdev=4100.90 00:18:58.352 lat (usec): min=1206, max=29176, avg=10763.65, stdev=4124.31 00:18:58.352 clat percentiles (usec): 00:18:58.352 | 1.00th=[ 4817], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 7111], 00:18:58.352 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[11076], 00:18:58.352 | 70.00th=[12649], 80.00th=[14091], 90.00th=[16450], 95.00th=[18744], 00:18:58.352 | 99.00th=[21365], 99.50th=[21627], 99.90th=[24249], 99.95th=[24249], 00:18:58.352 | 99.99th=[28443] 00:18:58.352 bw ( KiB/s): min=18136, max=22824, per=20.71%, avg=20480.00, stdev=3314.92, samples=2 00:18:58.352 iops : min= 4534, max= 5706, avg=5120.00, stdev=828.73, samples=2 00:18:58.352 lat (msec) : 2=0.08%, 4=0.08%, 10=36.95%, 20=50.69%, 50=12.20% 00:18:58.352 cpu : usr=3.27%, sys=7.14%, ctx=183, majf=0, minf=1 00:18:58.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.352 issued rwts: total=4914,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.352 00:18:58.352 Run status group 0 (all jobs): 00:18:58.352 READ: bw=92.4MiB/s (96.8MB/s), 15.9MiB/s-33.8MiB/s (16.6MB/s-35.4MB/s), 
io=93.2MiB (97.7MB), run=1005-1009msec 00:18:58.352 WRITE: bw=96.6MiB/s (101MB/s), 17.1MiB/s-35.6MiB/s (17.9MB/s-37.3MB/s), io=97.4MiB (102MB), run=1005-1009msec 00:18:58.352 00:18:58.352 Disk stats (read/write): 00:18:58.352 nvme0n1: ios=7218/7631, merge=0/0, ticks=45611/46729, in_queue=92340, util=86.77% 00:18:58.352 nvme0n2: ios=3622/3599, merge=0/0, ticks=36946/45882, in_queue=82828, util=87.55% 00:18:58.352 nvme0n3: ios=4608/4999, merge=0/0, ticks=56522/44855, in_queue=101377, util=88.48% 00:18:58.352 nvme0n4: ios=4143/4389, merge=0/0, ticks=58496/42859, in_queue=101355, util=98.07% 00:18:58.352 05:34:01 -- target/fio.sh@55 -- # sync 00:18:58.352 05:34:01 -- target/fio.sh@59 -- # fio_pid=1831260 00:18:58.352 05:34:01 -- target/fio.sh@61 -- # sleep 3 00:18:58.352 05:34:01 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:58.352 [global] 00:18:58.352 thread=1 00:18:58.352 invalidate=1 00:18:58.352 rw=read 00:18:58.352 time_based=1 00:18:58.352 runtime=10 00:18:58.352 ioengine=libaio 00:18:58.352 direct=1 00:18:58.352 bs=4096 00:18:58.352 iodepth=1 00:18:58.352 norandommap=1 00:18:58.352 numjobs=1 00:18:58.352 00:18:58.352 [job0] 00:18:58.352 filename=/dev/nvme0n1 00:18:58.352 [job1] 00:18:58.352 filename=/dev/nvme0n2 00:18:58.352 [job2] 00:18:58.353 filename=/dev/nvme0n3 00:18:58.353 [job3] 00:18:58.353 filename=/dev/nvme0n4 00:18:58.353 Could not set queue depth (nvme0n1) 00:18:58.353 Could not set queue depth (nvme0n2) 00:18:58.353 Could not set queue depth (nvme0n3) 00:18:58.353 Could not set queue depth (nvme0n4) 00:18:58.353 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.353 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.353 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.353 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.353 fio-3.35 00:18:58.353 Starting 4 threads 00:19:01.659 05:34:04 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:01.659 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10080256, buflen=4096 00:19:01.659 fio: pid=1831475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:01.659 05:34:04 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:01.659 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9883648, buflen=4096 00:19:01.659 fio: pid=1831474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:01.659 05:34:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.659 05:34:04 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:01.659 05:34:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.659 05:34:04 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:01.659 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1273856, buflen=4096 00:19:01.659 fio: pid=1831472, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:19:01.659 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11964416, buflen=4096 00:19:01.659 fio: pid=1831473, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:01.659 05:34:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.659 05:34:04 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:01.921 00:19:01.921 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1831472: Sat Dec 7 05:34:04 2024 00:19:01.921 read: IOPS=105, BW=419KiB/s (429kB/s)(1244KiB/2968msec) 00:19:01.921 slat (usec): min=7, max=37585, avg=243.40, stdev=2447.60 00:19:01.921 clat (usec): min=515, max=42132, avg=9262.53, stdev=16452.18 00:19:01.921 lat (usec): min=541, max=56942, avg=9506.63, stdev=16646.46 00:19:01.921 clat percentiles (usec): 00:19:01.921 | 1.00th=[ 619], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 930], 00:19:01.921 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1074], 00:19:01.921 | 70.00th=[ 1106], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:01.921 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:01.921 | 99.99th=[42206] 00:19:01.921 bw ( KiB/s): min= 96, max= 1344, per=3.32%, avg=345.60, stdev=558.12, samples=5 00:19:01.921 iops : min= 24, max= 336, avg=86.40, stdev=139.53, samples=5 00:19:01.921 lat (usec) : 750=3.21%, 1000=38.14% 00:19:01.921 lat (msec) : 2=38.14%, 50=20.19% 00:19:01.921 cpu : usr=0.03%, sys=0.37%, ctx=315, majf=0, minf=1 00:19:01.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.921 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.921 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.921 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1831473: Sat Dec 7 05:34:04 2024 00:19:01.921 read: IOPS=937, BW=3747KiB/s (3837kB/s)(11.4MiB/3118msec) 00:19:01.921 slat (usec): min=6, max=22594, avg=51.82, stdev=653.78 00:19:01.921 clat (usec): min=425, max=41990, avg=1009.07, stdev=1338.69 00:19:01.921 lat (usec): min=464, max=42015, avg=1058.66, stdev=1485.06 00:19:01.921 clat percentiles (usec): 00:19:01.921 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 898], 00:19:01.921 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 996], 00:19:01.921 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:19:01.921 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[41681], 99.95th=[41681], 00:19:01.921 | 99.99th=[42206] 00:19:01.921 bw ( KiB/s): min= 2957, max= 4008, per=36.65%, avg=3811.50, stdev=419.06, samples=6 00:19:01.922 iops : min= 739, max= 1002, avg=952.83, stdev=104.87, samples=6 00:19:01.922 lat (usec) : 500=0.14%, 750=3.80%, 1000=59.48% 00:19:01.922 lat (msec) : 2=36.34%, 10=0.07%, 20=0.03%, 50=0.10% 00:19:01.922 cpu : usr=1.67%, sys=3.82%, ctx=2927, majf=0, minf=2 00:19:01.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 issued rwts: total=2922,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:19:01.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.922 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1831474: Sat Dec 7 05:34:04 2024 00:19:01.922 read: IOPS=870, BW=3479KiB/s (3563kB/s)(9652KiB/2774msec) 00:19:01.922 slat (usec): min=7, max=15577, avg=39.03, stdev=409.18 00:19:01.922 clat (usec): min=503, max=2307, avg=1093.09, stdev=141.97 00:19:01.922 lat (usec): min=530, max=16817, avg=1132.13, stdev=436.15 00:19:01.922 clat percentiles (usec): 00:19:01.922 | 1.00th=[ 734], 5.00th=[ 840], 10.00th=[ 906], 20.00th=[ 971], 00:19:01.922 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[ 1123], 60.00th=[ 1172], 00:19:01.922 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1270], 00:19:01.922 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1369], 99.95th=[ 1401], 00:19:01.922 | 99.99th=[ 2311] 00:19:01.922 bw ( KiB/s): min= 3504, max= 3640, per=34.24%, avg=3561.60, stdev=51.97, samples=5 00:19:01.922 iops : min= 876, max= 910, avg=890.40, stdev=12.99, samples=5 00:19:01.922 lat (usec) : 750=1.28%, 1000=26.30% 00:19:01.922 lat (msec) : 2=72.33%, 4=0.04% 00:19:01.922 cpu : usr=1.30%, sys=3.79%, ctx=2417, majf=0, minf=2 00:19:01.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 issued rwts: total=2414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.922 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1831475: Sat Dec 7 05:34:04 2024 00:19:01.922 read: IOPS=954, BW=3817KiB/s (3909kB/s)(9844KiB/2579msec) 00:19:01.922 slat (nsec): min=7388, max=81736, avg=25807.97, stdev=3120.10 00:19:01.922 clat (usec): min=464, max=1237, avg=1011.83, stdev=79.54 00:19:01.922 lat (usec): min=489, max=1261, avg=1037.64, stdev=79.56 00:19:01.922 clat percentiles (usec): 00:19:01.922 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 963], 00:19:01.922 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1037], 00:19:01.922 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:19:01.922 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:19:01.922 | 99.99th=[ 1237] 00:19:01.922 bw ( KiB/s): min= 3784, max= 3872, per=36.89%, avg=3836.80, stdev=32.79, samples=5 00:19:01.922 iops : min= 946, max= 968, avg=959.20, stdev= 8.20, samples=5 00:19:01.922 lat (usec) : 500=0.04%, 750=0.41%, 1000=34.57% 00:19:01.922 lat (msec) : 2=64.95% 00:19:01.922 cpu : usr=0.97%, sys=2.95%, ctx=2465, majf=0, minf=2 00:19:01.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.922 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.922 00:19:01.922 Run status group 0 (all jobs): 00:19:01.922 READ: bw=10.2MiB/s (10.6MB/s), 419KiB/s-3817KiB/s (429kB/s-3909kB/s), io=31.7MiB (33.2MB), run=2579-3118msec 00:19:01.922 00:19:01.922 Disk stats (read/write): 00:19:01.922 nvme0n1: ios=270/0, merge=0/0, ticks=2755/0, in_queue=2755, util=92.52% 
00:19:01.922 nvme0n2: ios=2920/0, merge=0/0, ticks=2723/0, in_queue=2723, util=93.65% 00:19:01.922 nvme0n3: ios=2296/0, merge=0/0, ticks=2216/0, in_queue=2216, util=95.99% 00:19:01.922 nvme0n4: ios=2237/0, merge=0/0, ticks=2200/0, in_queue=2200, util=96.02% 00:19:01.922 05:34:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.922 05:34:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:02.181 05:34:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.181 05:34:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:02.181 05:34:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.181 05:34:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:02.441 05:34:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.441 05:34:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:02.700 05:34:05 -- target/fio.sh@69 -- # fio_status=0 00:19:02.700 05:34:05 -- target/fio.sh@70 -- # wait 1831260 00:19:02.700 05:34:05 -- target/fio.sh@70 -- # fio_status=4 00:19:02.700 05:34:05 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.701 05:34:05 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.701 05:34:05 -- common/autotest_common.sh@1208 -- # local i=0 00:19:02.701 05:34:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:02.701 05:34:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.701 05:34:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:02.701 05:34:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.701 05:34:05 -- common/autotest_common.sh@1220 -- # return 0 00:19:02.701 05:34:05 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:02.701 05:34:05 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:02.701 nvmf hotplug test: fio failed as expected 00:19:02.701 05:34:05 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.960 05:34:06 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:02.960 05:34:06 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:02.960 05:34:06 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:02.960 05:34:06 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:02.960 05:34:06 -- target/fio.sh@91 -- # nvmftestfini 00:19:02.960 05:34:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:02.960 05:34:06 -- nvmf/common.sh@116 -- # sync 00:19:02.960 05:34:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:02.960 05:34:06 -- nvmf/common.sh@119 -- # set +e 00:19:02.960 05:34:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:02.960 05:34:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:02.960 rmmod nvme_tcp 00:19:02.960 rmmod nvme_fabrics 00:19:02.960 rmmod nvme_keyring 00:19:02.960 05:34:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:02.960 05:34:06 -- 
nvmf/common.sh@123 -- # set -e 00:19:02.960 05:34:06 -- nvmf/common.sh@124 -- # return 0 00:19:02.960 05:34:06 -- nvmf/common.sh@477 -- # '[' -n 1827725 ']' 00:19:02.960 05:34:06 -- nvmf/common.sh@478 -- # killprocess 1827725 00:19:02.960 05:34:06 -- common/autotest_common.sh@936 -- # '[' -z 1827725 ']' 00:19:02.960 05:34:06 -- common/autotest_common.sh@940 -- # kill -0 1827725 00:19:02.960 05:34:06 -- common/autotest_common.sh@941 -- # uname 00:19:02.960 05:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:02.960 05:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1827725 00:19:03.221 05:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:03.221 05:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:03.221 05:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1827725' 00:19:03.221 killing process with pid 1827725 00:19:03.221 05:34:06 -- common/autotest_common.sh@955 -- # kill 1827725 00:19:03.221 05:34:06 -- common/autotest_common.sh@960 -- # wait 1827725 00:19:03.221 05:34:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:03.221 05:34:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:03.221 05:34:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:03.221 05:34:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.221 05:34:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:03.221 05:34:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.221 05:34:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.221 05:34:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.768 05:34:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:05.768 00:19:05.768 real 0m28.889s 00:19:05.768 user 2m33.846s 00:19:05.768 sys 0m9.681s 00:19:05.768 05:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:05.768 05:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:05.768 ************************************ 00:19:05.768 END TEST nvmf_fio_target 00:19:05.768 ************************************ 00:19:05.768 05:34:08 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:05.768 05:34:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:05.768 05:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.768 05:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:05.768 ************************************ 00:19:05.768 START TEST nvmf_bdevio 00:19:05.768 ************************************ 00:19:05.768 05:34:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:05.768 * Looking for test storage... 
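For context before the bdevio run below: the "nvmf hotplug test: fio failed as expected" result recorded above comes from racing a short fio read job against RPC deletion of the bdevs that back the exported namespaces. A minimal sketch of that sequence, assuming an nvmf target already serving nqn.2016-06.io.spdk:cnode1 whose namespaces the host has connected as /dev/nvme0n1 through /dev/nvme0n4; the $spdk and $rpc shell variables and the exact failure check are shorthand introduced here, not part of the captured run:

  #!/usr/bin/env bash
  # Hedged recap of the hotplug check traced above; paths and variable names are illustrative.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"

  # 10-second, queue-depth-1 read job against the connected namespaces, run in the background.
  "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3   # the harness also waits a few seconds before pulling bdevs (target/fio.sh@61)

  # Delete the backing bdevs while fio is still issuing reads.
  "$rpc" bdev_raid_delete concat0
  "$rpc" bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$rpc" bdev_malloc_delete "$m"
  done

  # fio is now expected to fail (io_u errors such as "Operation not supported" above).
  if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal" >&2
    exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
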
00:19:05.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.768 05:34:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:05.768 05:34:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:05.768 05:34:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:05.768 05:34:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:05.768 05:34:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:05.768 05:34:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:05.768 05:34:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:05.768 05:34:08 -- scripts/common.sh@335 -- # IFS=.-: 00:19:05.768 05:34:08 -- scripts/common.sh@335 -- # read -ra ver1 00:19:05.768 05:34:08 -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.768 05:34:08 -- scripts/common.sh@336 -- # read -ra ver2 00:19:05.768 05:34:08 -- scripts/common.sh@337 -- # local 'op=<' 00:19:05.768 05:34:08 -- scripts/common.sh@339 -- # ver1_l=2 00:19:05.768 05:34:08 -- scripts/common.sh@340 -- # ver2_l=1 00:19:05.768 05:34:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:05.768 05:34:08 -- scripts/common.sh@343 -- # case "$op" in 00:19:05.768 05:34:08 -- scripts/common.sh@344 -- # : 1 00:19:05.768 05:34:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:05.768 05:34:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.768 05:34:08 -- scripts/common.sh@364 -- # decimal 1 00:19:05.768 05:34:08 -- scripts/common.sh@352 -- # local d=1 00:19:05.768 05:34:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.768 05:34:08 -- scripts/common.sh@354 -- # echo 1 00:19:05.768 05:34:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:05.768 05:34:08 -- scripts/common.sh@365 -- # decimal 2 00:19:05.768 05:34:08 -- scripts/common.sh@352 -- # local d=2 00:19:05.768 05:34:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.768 05:34:08 -- scripts/common.sh@354 -- # echo 2 00:19:05.768 05:34:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:05.768 05:34:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:05.768 05:34:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:05.768 05:34:08 -- scripts/common.sh@367 -- # return 0 00:19:05.768 05:34:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.768 05:34:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.768 --rc genhtml_branch_coverage=1 00:19:05.768 --rc genhtml_function_coverage=1 00:19:05.768 --rc genhtml_legend=1 00:19:05.768 --rc geninfo_all_blocks=1 00:19:05.768 --rc geninfo_unexecuted_blocks=1 00:19:05.768 00:19:05.768 ' 00:19:05.768 05:34:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.768 --rc genhtml_branch_coverage=1 00:19:05.768 --rc genhtml_function_coverage=1 00:19:05.768 --rc genhtml_legend=1 00:19:05.768 --rc geninfo_all_blocks=1 00:19:05.768 --rc geninfo_unexecuted_blocks=1 00:19:05.768 00:19:05.768 ' 00:19:05.769 05:34:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:05.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.769 --rc genhtml_branch_coverage=1 00:19:05.769 --rc genhtml_function_coverage=1 00:19:05.769 --rc genhtml_legend=1 00:19:05.769 --rc geninfo_all_blocks=1 00:19:05.769 --rc geninfo_unexecuted_blocks=1 00:19:05.769 00:19:05.769 
' 00:19:05.769 05:34:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:05.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.769 --rc genhtml_branch_coverage=1 00:19:05.769 --rc genhtml_function_coverage=1 00:19:05.769 --rc genhtml_legend=1 00:19:05.769 --rc geninfo_all_blocks=1 00:19:05.769 --rc geninfo_unexecuted_blocks=1 00:19:05.769 00:19:05.769 ' 00:19:05.769 05:34:08 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.769 05:34:08 -- nvmf/common.sh@7 -- # uname -s 00:19:05.769 05:34:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.769 05:34:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.769 05:34:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.769 05:34:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.769 05:34:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.769 05:34:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.769 05:34:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.769 05:34:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.769 05:34:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.769 05:34:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.769 05:34:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.769 05:34:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.769 05:34:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.769 05:34:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.769 05:34:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.769 05:34:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.769 05:34:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.769 05:34:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.769 05:34:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.769 05:34:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.769 05:34:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.769 05:34:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.769 05:34:08 -- paths/export.sh@5 -- # export PATH 00:19:05.769 05:34:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.769 05:34:08 -- nvmf/common.sh@46 -- # : 0 00:19:05.769 05:34:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:05.769 05:34:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:05.769 05:34:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:05.769 05:34:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.769 05:34:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.769 05:34:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:05.769 05:34:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:05.769 05:34:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:05.769 05:34:08 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.769 05:34:08 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.769 05:34:08 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:05.769 05:34:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:05.769 05:34:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.769 05:34:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:05.769 05:34:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:05.769 05:34:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:05.769 05:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.769 05:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.769 05:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.769 05:34:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:05.769 05:34:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:05.769 05:34:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:05.769 05:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:13.907 05:34:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.907 05:34:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.907 05:34:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.907 05:34:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.907 05:34:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.907 05:34:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.907 05:34:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.907 05:34:15 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.907 05:34:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.907 05:34:15 -- nvmf/common.sh@295 
-- # e810=() 00:19:13.907 05:34:15 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.907 05:34:15 -- nvmf/common.sh@296 -- # x722=() 00:19:13.907 05:34:15 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.907 05:34:15 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.907 05:34:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.907 05:34:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.907 05:34:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.907 05:34:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:13.907 05:34:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.907 05:34:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:13.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:13.907 05:34:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.907 05:34:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:13.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:13.907 05:34:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.907 05:34:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.907 05:34:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.907 05:34:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:13.907 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:13.907 05:34:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.907 05:34:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.907 05:34:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.907 05:34:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.907 05:34:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:13.907 Found net devices under 0000:31:00.1: cvl_0_1 00:19:13.907 05:34:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.907 05:34:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.907 05:34:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.907 05:34:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.907 05:34:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.907 05:34:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.907 05:34:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.907 05:34:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.907 05:34:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.907 05:34:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.907 05:34:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.907 05:34:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.907 05:34:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.907 05:34:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.907 05:34:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.907 05:34:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.907 05:34:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.907 05:34:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.907 05:34:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.907 05:34:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:13.908 05:34:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.908 05:34:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.908 05:34:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.908 05:34:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:13.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:19:13.908 00:19:13.908 --- 10.0.0.2 ping statistics --- 00:19:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.908 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:19:13.908 05:34:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:13.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:19:13.908 00:19:13.908 --- 10.0.0.1 ping statistics --- 00:19:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.908 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:19:13.908 05:34:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.908 05:34:16 -- nvmf/common.sh@410 -- # return 0 00:19:13.908 05:34:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.908 05:34:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.908 05:34:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:13.908 05:34:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:13.908 05:34:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.908 05:34:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:13.908 05:34:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:13.908 05:34:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:13.908 05:34:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.908 05:34:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.908 05:34:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 05:34:16 -- nvmf/common.sh@469 -- # nvmfpid=1836591 00:19:13.908 05:34:16 -- nvmf/common.sh@470 -- # waitforlisten 1836591 00:19:13.908 05:34:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:13.908 05:34:16 -- common/autotest_common.sh@829 -- # '[' -z 1836591 ']' 00:19:13.908 05:34:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.908 05:34:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.908 05:34:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.908 05:34:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.908 05:34:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 [2024-12-07 05:34:16.145948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:13.908 [2024-12-07 05:34:16.146021] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.908 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.908 [2024-12-07 05:34:16.237534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.908 [2024-12-07 05:34:16.328005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.908 [2024-12-07 05:34:16.328164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.908 [2024-12-07 05:34:16.328173] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.908 [2024-12-07 05:34:16.328183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
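The two successful pings above close out the network plumbing for the TCP transport: one physical port (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, and each side gets an address on 10.0.0.0/24. A condensed sketch of that sequence, run as root and using the same interface and namespace names shown in the trace (the address-flush and cleanup steps are omitted):

  # Target port gets its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator is 10.0.0.1, target is 10.0.0.2, same /24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring up both ports plus the namespace loopback.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic (TCP port 4420) arriving on the root-namespace port,
  # then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target application itself is then launched inside the namespace, as in the trace:
  #   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
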
00:19:13.908 [2024-12-07 05:34:16.328357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:13.908 [2024-12-07 05:34:16.328518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:13.908 [2024-12-07 05:34:16.328680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.908 [2024-12-07 05:34:16.328681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:13.908 05:34:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.908 05:34:16 -- common/autotest_common.sh@862 -- # return 0 00:19:13.908 05:34:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.908 05:34:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.908 05:34:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 05:34:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.908 05:34:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.908 05:34:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.908 05:34:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 [2024-12-07 05:34:16.996122] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.908 05:34:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.908 05:34:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.908 05:34:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.908 05:34:17 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 Malloc0 00:19:13.908 05:34:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.908 05:34:17 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.908 05:34:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.908 05:34:17 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 05:34:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.908 05:34:17 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.908 05:34:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.908 05:34:17 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 05:34:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.908 05:34:17 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.908 05:34:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.908 05:34:17 -- common/autotest_common.sh@10 -- # set +x 00:19:13.908 [2024-12-07 05:34:17.054997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.908 05:34:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.908 05:34:17 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:13.908 05:34:17 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:13.908 05:34:17 -- nvmf/common.sh@520 -- # config=() 00:19:13.908 05:34:17 -- nvmf/common.sh@520 -- # local subsystem config 00:19:13.908 05:34:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:13.908 05:34:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:13.908 { 00:19:13.908 "params": { 00:19:13.908 "name": "Nvme$subsystem", 00:19:13.908 "trtype": "$TEST_TRANSPORT", 00:19:13.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:13.908 "adrfam": "ipv4", 00:19:13.908 "trsvcid": 
"$NVMF_PORT", 00:19:13.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:13.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:13.908 "hdgst": ${hdgst:-false}, 00:19:13.908 "ddgst": ${ddgst:-false} 00:19:13.908 }, 00:19:13.908 "method": "bdev_nvme_attach_controller" 00:19:13.908 } 00:19:13.908 EOF 00:19:13.908 )") 00:19:13.908 05:34:17 -- nvmf/common.sh@542 -- # cat 00:19:13.908 05:34:17 -- nvmf/common.sh@544 -- # jq . 00:19:13.908 05:34:17 -- nvmf/common.sh@545 -- # IFS=, 00:19:13.908 05:34:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:13.908 "params": { 00:19:13.908 "name": "Nvme1", 00:19:13.908 "trtype": "tcp", 00:19:13.908 "traddr": "10.0.0.2", 00:19:13.908 "adrfam": "ipv4", 00:19:13.908 "trsvcid": "4420", 00:19:13.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.908 "hdgst": false, 00:19:13.908 "ddgst": false 00:19:13.908 }, 00:19:13.908 "method": "bdev_nvme_attach_controller" 00:19:13.908 }' 00:19:13.908 [2024-12-07 05:34:17.104592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:13.908 [2024-12-07 05:34:17.104641] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836899 ] 00:19:13.908 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.168 [2024-12-07 05:34:17.169628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:14.168 [2024-12-07 05:34:17.236028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.168 [2024-12-07 05:34:17.236194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.168 [2024-12-07 05:34:17.236288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.427 [2024-12-07 05:34:17.534822] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:14.427 [2024-12-07 05:34:17.534852] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:14.427 I/O targets: 00:19:14.427 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:14.427 00:19:14.427 00:19:14.427 CUnit - A unit testing framework for C - Version 2.1-3 00:19:14.427 http://cunit.sourceforge.net/ 00:19:14.427 00:19:14.427 00:19:14.427 Suite: bdevio tests on: Nvme1n1 00:19:14.427 Test: blockdev write read block ...passed 00:19:14.427 Test: blockdev write zeroes read block ...passed 00:19:14.427 Test: blockdev write zeroes read no split ...passed 00:19:14.687 Test: blockdev write zeroes read split ...passed 00:19:14.687 Test: blockdev write zeroes read split partial ...passed 00:19:14.687 Test: blockdev reset ...[2024-12-07 05:34:17.748375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:14.687 [2024-12-07 05:34:17.748438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13224a0 (9): Bad file descriptor 00:19:14.687 [2024-12-07 05:34:17.765354] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:14.687 passed 00:19:14.687 Test: blockdev write read 8 blocks ...passed 00:19:14.687 Test: blockdev write read size > 128k ...passed 00:19:14.687 Test: blockdev write read invalid size ...passed 00:19:14.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.687 Test: blockdev write read max offset ...passed 00:19:14.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.947 Test: blockdev writev readv 8 blocks ...passed 00:19:14.947 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.947 Test: blockdev writev readv block ...passed 00:19:14.947 Test: blockdev writev readv size > 128k ...passed 00:19:14.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.947 Test: blockdev comparev and writev ...[2024-12-07 05:34:18.028620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.028644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.028655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.028665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.029111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.029119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.029128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.029134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.029584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.029592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.029601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.029606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.030068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.030075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.030085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.947 [2024-12-07 05:34:18.030090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.947 passed 00:19:14.947 Test: blockdev nvme passthru rw ...passed 00:19:14.947 Test: blockdev nvme passthru vendor specific ...[2024-12-07 05:34:18.113804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.947 [2024-12-07 05:34:18.113814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.114175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.947 [2024-12-07 05:34:18.114182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.114505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.947 [2024-12-07 05:34:18.114512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.947 [2024-12-07 05:34:18.114831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.947 [2024-12-07 05:34:18.114838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.947 passed 00:19:14.947 Test: blockdev nvme admin passthru ...passed 00:19:14.947 Test: blockdev copy ...passed 00:19:14.947 00:19:14.947 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.947 suites 1 1 n/a 0 0 00:19:14.947 tests 23 23 23 0 0 00:19:14.947 asserts 152 152 152 0 n/a 00:19:14.947 00:19:14.947 Elapsed time = 1.279 seconds 00:19:15.208 05:34:18 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.208 05:34:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.208 05:34:18 -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 05:34:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.208 05:34:18 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:15.208 05:34:18 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:15.208 05:34:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:15.208 05:34:18 -- nvmf/common.sh@116 -- # sync 00:19:15.208 05:34:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:15.208 05:34:18 -- nvmf/common.sh@119 -- # set +e 00:19:15.208 05:34:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:15.208 05:34:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:15.208 rmmod nvme_tcp 00:19:15.208 rmmod nvme_fabrics 00:19:15.208 rmmod nvme_keyring 00:19:15.208 05:34:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:15.208 05:34:18 -- nvmf/common.sh@123 -- # set -e 00:19:15.208 05:34:18 -- nvmf/common.sh@124 -- # return 0 00:19:15.208 05:34:18 -- nvmf/common.sh@477 -- # '[' -n 1836591 ']' 00:19:15.208 05:34:18 -- nvmf/common.sh@478 -- # killprocess 1836591 00:19:15.208 05:34:18 -- common/autotest_common.sh@936 -- # '[' -z 1836591 ']' 00:19:15.208 05:34:18 -- common/autotest_common.sh@940 -- # kill -0 1836591 00:19:15.208 05:34:18 -- common/autotest_common.sh@941 -- # uname 00:19:15.208 05:34:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:15.208 05:34:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1836591 00:19:15.469 05:34:18 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:15.469 05:34:18 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:15.469 05:34:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1836591' 00:19:15.469 killing process with pid 1836591 00:19:15.469 05:34:18 -- common/autotest_common.sh@955 -- # kill 1836591 00:19:15.469 05:34:18 -- common/autotest_common.sh@960 -- # wait 1836591 00:19:15.469 05:34:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:15.470 05:34:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:15.470 05:34:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:15.470 05:34:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.470 05:34:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:15.470 05:34:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.470 05:34:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.470 05:34:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.019 05:34:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:18.019 00:19:18.019 real 0m12.195s 00:19:18.019 user 0m13.706s 00:19:18.019 sys 0m6.074s 00:19:18.019 05:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:18.019 05:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:18.019 ************************************ 00:19:18.019 END TEST nvmf_bdevio 00:19:18.019 ************************************ 00:19:18.019 05:34:20 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:18.019 05:34:20 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:18.019 05:34:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:18.019 05:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:18.019 05:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:18.019 ************************************ 00:19:18.019 START TEST nvmf_bdevio_no_huge 00:19:18.019 ************************************ 00:19:18.019 05:34:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:18.019 * Looking for test storage... 
00:19:18.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.019 05:34:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:18.019 05:34:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:18.019 05:34:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:18.019 05:34:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:18.019 05:34:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:18.019 05:34:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:18.019 05:34:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:18.019 05:34:20 -- scripts/common.sh@335 -- # IFS=.-: 00:19:18.019 05:34:20 -- scripts/common.sh@335 -- # read -ra ver1 00:19:18.020 05:34:20 -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.020 05:34:20 -- scripts/common.sh@336 -- # read -ra ver2 00:19:18.020 05:34:20 -- scripts/common.sh@337 -- # local 'op=<' 00:19:18.020 05:34:20 -- scripts/common.sh@339 -- # ver1_l=2 00:19:18.020 05:34:20 -- scripts/common.sh@340 -- # ver2_l=1 00:19:18.020 05:34:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:18.020 05:34:20 -- scripts/common.sh@343 -- # case "$op" in 00:19:18.020 05:34:20 -- scripts/common.sh@344 -- # : 1 00:19:18.020 05:34:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:18.020 05:34:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.020 05:34:20 -- scripts/common.sh@364 -- # decimal 1 00:19:18.020 05:34:20 -- scripts/common.sh@352 -- # local d=1 00:19:18.020 05:34:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.020 05:34:20 -- scripts/common.sh@354 -- # echo 1 00:19:18.020 05:34:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:18.020 05:34:20 -- scripts/common.sh@365 -- # decimal 2 00:19:18.020 05:34:20 -- scripts/common.sh@352 -- # local d=2 00:19:18.020 05:34:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.020 05:34:20 -- scripts/common.sh@354 -- # echo 2 00:19:18.020 05:34:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:18.020 05:34:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:18.020 05:34:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:18.020 05:34:20 -- scripts/common.sh@367 -- # return 0 00:19:18.020 05:34:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.020 05:34:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.020 --rc genhtml_branch_coverage=1 00:19:18.020 --rc genhtml_function_coverage=1 00:19:18.020 --rc genhtml_legend=1 00:19:18.020 --rc geninfo_all_blocks=1 00:19:18.020 --rc geninfo_unexecuted_blocks=1 00:19:18.020 00:19:18.020 ' 00:19:18.020 05:34:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.020 --rc genhtml_branch_coverage=1 00:19:18.020 --rc genhtml_function_coverage=1 00:19:18.020 --rc genhtml_legend=1 00:19:18.020 --rc geninfo_all_blocks=1 00:19:18.020 --rc geninfo_unexecuted_blocks=1 00:19:18.020 00:19:18.020 ' 00:19:18.020 05:34:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.020 --rc genhtml_branch_coverage=1 00:19:18.020 --rc genhtml_function_coverage=1 00:19:18.020 --rc genhtml_legend=1 00:19:18.020 --rc geninfo_all_blocks=1 00:19:18.020 --rc geninfo_unexecuted_blocks=1 00:19:18.020 00:19:18.020 
' 00:19:18.020 05:34:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.020 --rc genhtml_branch_coverage=1 00:19:18.020 --rc genhtml_function_coverage=1 00:19:18.020 --rc genhtml_legend=1 00:19:18.020 --rc geninfo_all_blocks=1 00:19:18.020 --rc geninfo_unexecuted_blocks=1 00:19:18.020 00:19:18.020 ' 00:19:18.020 05:34:20 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.020 05:34:20 -- nvmf/common.sh@7 -- # uname -s 00:19:18.020 05:34:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.020 05:34:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.020 05:34:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.020 05:34:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.020 05:34:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.020 05:34:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.020 05:34:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.020 05:34:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.020 05:34:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.020 05:34:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.020 05:34:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.020 05:34:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.020 05:34:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.020 05:34:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.020 05:34:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.020 05:34:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.020 05:34:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.020 05:34:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.020 05:34:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.020 05:34:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.020 05:34:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.020 05:34:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.020 05:34:20 -- paths/export.sh@5 -- # export PATH 00:19:18.020 05:34:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.020 05:34:20 -- nvmf/common.sh@46 -- # : 0 00:19:18.020 05:34:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:18.020 05:34:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:18.020 05:34:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:18.020 05:34:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.020 05:34:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.020 05:34:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:18.020 05:34:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:18.020 05:34:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:18.020 05:34:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.020 05:34:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.020 05:34:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:18.020 05:34:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:18.020 05:34:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.020 05:34:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:18.020 05:34:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:18.020 05:34:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:18.020 05:34:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.020 05:34:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.020 05:34:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.020 05:34:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:18.020 05:34:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:18.020 05:34:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:18.020 05:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 05:34:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:26.163 05:34:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:26.163 05:34:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:26.163 05:34:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:26.163 05:34:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:26.163 05:34:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:26.163 05:34:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:26.163 05:34:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:26.163 05:34:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:26.163 05:34:27 -- nvmf/common.sh@295 
-- # e810=() 00:19:26.163 05:34:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:26.163 05:34:27 -- nvmf/common.sh@296 -- # x722=() 00:19:26.163 05:34:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:26.163 05:34:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:26.163 05:34:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:26.163 05:34:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.163 05:34:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:26.163 05:34:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:26.163 05:34:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.163 05:34:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:26.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:26.163 05:34:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.163 05:34:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:26.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:26.163 05:34:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.163 05:34:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.163 05:34:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.163 05:34:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:26.163 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:26.163 05:34:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.163 05:34:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.163 05:34:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.163 05:34:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.163 05:34:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:26.163 Found net devices under 0000:31:00.1: cvl_0_1 00:19:26.163 05:34:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.163 05:34:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:26.163 05:34:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:26.163 05:34:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:26.163 05:34:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.163 05:34:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.163 05:34:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.163 05:34:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:26.163 05:34:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.163 05:34:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.163 05:34:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:26.163 05:34:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.163 05:34:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.163 05:34:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:26.163 05:34:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:26.163 05:34:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.163 05:34:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.163 05:34:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.163 05:34:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.163 05:34:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:26.163 05:34:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.163 05:34:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.163 05:34:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.163 05:34:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:26.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:19:26.163 00:19:26.163 --- 10.0.0.2 ping statistics --- 00:19:26.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.163 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:19:26.163 05:34:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:19:26.163 00:19:26.163 --- 10.0.0.1 ping statistics --- 00:19:26.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.163 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:19:26.163 05:34:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.163 05:34:28 -- nvmf/common.sh@410 -- # return 0 00:19:26.163 05:34:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:26.163 05:34:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.163 05:34:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:26.163 05:34:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:26.163 05:34:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.163 05:34:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:26.163 05:34:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:26.163 05:34:28 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:26.163 05:34:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:26.163 05:34:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.163 05:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 05:34:28 -- nvmf/common.sh@469 -- # nvmfpid=1841362 00:19:26.163 05:34:28 -- nvmf/common.sh@470 -- # waitforlisten 1841362 00:19:26.163 05:34:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:26.163 05:34:28 -- common/autotest_common.sh@829 -- # '[' -z 1841362 ']' 00:19:26.163 05:34:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.163 05:34:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.163 05:34:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.163 05:34:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.163 05:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 [2024-12-07 05:34:28.369707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:26.163 [2024-12-07 05:34:28.369757] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:26.163 [2024-12-07 05:34:28.457730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.163 [2024-12-07 05:34:28.547248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:26.163 [2024-12-07 05:34:28.547375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.163 [2024-12-07 05:34:28.547386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.163 [2024-12-07 05:34:28.547395] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
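The nvmf_tcp_init block above is what gives this run a real TCP path on a single host: one E810 port (cvl_0_0, under 0000:31:00.0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while the other port (cvl_0_1, under 0000:31:00.1) stays in the root namespace as the 10.0.0.1 initiator side, and the two pings confirm connectivity in both directions before the target is started. A minimal standalone sketch of the same plumbing, assuming the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing seen in this run:

# reset any leftover addressing on both E810 ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# target side: move port 0 into its own namespace and give it the target address
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: port 1 stays in the root namespace with the initiator address
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# open the NVMe/TCP port (4420) on the initiator-facing interface and verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once this is in place, every target-side command in the rest of the log is simply wrapped in "ip netns exec cvl_0_0_ns_spdk", which is what the NVMF_TARGET_NS_CMD prefix shown above expands to.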
00:19:26.163 [2024-12-07 05:34:28.547536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.163 [2024-12-07 05:34:28.547655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:26.163 [2024-12-07 05:34:28.547808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.163 [2024-12-07 05:34:28.547809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:26.163 05:34:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.163 05:34:29 -- common/autotest_common.sh@862 -- # return 0 00:19:26.163 05:34:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:26.163 05:34:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.163 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 05:34:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.163 05:34:29 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.163 05:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.163 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 [2024-12-07 05:34:29.224667] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.163 05:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.163 05:34:29 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.163 05:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.163 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 Malloc0 00:19:26.163 05:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.163 05:34:29 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.163 05:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.163 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.164 05:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.164 05:34:29 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.164 05:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.164 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.164 05:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.164 05:34:29 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.164 05:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.164 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.164 [2024-12-07 05:34:29.278416] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.164 05:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.164 05:34:29 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:26.164 05:34:29 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:26.164 05:34:29 -- nvmf/common.sh@520 -- # config=() 00:19:26.164 05:34:29 -- nvmf/common.sh@520 -- # local subsystem config 00:19:26.164 05:34:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:26.164 05:34:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:26.164 { 00:19:26.164 "params": { 00:19:26.164 "name": "Nvme$subsystem", 00:19:26.164 "trtype": "$TEST_TRANSPORT", 00:19:26.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:26.164 "adrfam": "ipv4", 00:19:26.164 
"trsvcid": "$NVMF_PORT", 00:19:26.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:26.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:26.164 "hdgst": ${hdgst:-false}, 00:19:26.164 "ddgst": ${ddgst:-false} 00:19:26.164 }, 00:19:26.164 "method": "bdev_nvme_attach_controller" 00:19:26.164 } 00:19:26.164 EOF 00:19:26.164 )") 00:19:26.164 05:34:29 -- nvmf/common.sh@542 -- # cat 00:19:26.164 05:34:29 -- nvmf/common.sh@544 -- # jq . 00:19:26.164 05:34:29 -- nvmf/common.sh@545 -- # IFS=, 00:19:26.164 05:34:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:26.164 "params": { 00:19:26.164 "name": "Nvme1", 00:19:26.164 "trtype": "tcp", 00:19:26.164 "traddr": "10.0.0.2", 00:19:26.164 "adrfam": "ipv4", 00:19:26.164 "trsvcid": "4420", 00:19:26.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.164 "hdgst": false, 00:19:26.164 "ddgst": false 00:19:26.164 }, 00:19:26.164 "method": "bdev_nvme_attach_controller" 00:19:26.164 }' 00:19:26.164 [2024-12-07 05:34:29.329897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:26.164 [2024-12-07 05:34:29.329965] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1841596 ] 00:19:26.424 [2024-12-07 05:34:29.401340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.424 [2024-12-07 05:34:29.497847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.424 [2024-12-07 05:34:29.497966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.424 [2024-12-07 05:34:29.497969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.684 [2024-12-07 05:34:29.800244] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:26.684 [2024-12-07 05:34:29.800269] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:26.684 I/O targets: 00:19:26.684 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:26.684 00:19:26.684 00:19:26.684 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.684 http://cunit.sourceforge.net/ 00:19:26.684 00:19:26.684 00:19:26.684 Suite: bdevio tests on: Nvme1n1 00:19:26.684 Test: blockdev write read block ...passed 00:19:26.684 Test: blockdev write zeroes read block ...passed 00:19:26.684 Test: blockdev write zeroes read no split ...passed 00:19:26.684 Test: blockdev write zeroes read split ...passed 00:19:26.944 Test: blockdev write zeroes read split partial ...passed 00:19:26.944 Test: blockdev reset ...[2024-12-07 05:34:29.976265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.944 [2024-12-07 05:34:29.976324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e5230 (9): Bad file descriptor 00:19:26.944 [2024-12-07 05:34:29.987796] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.944 passed 00:19:26.944 Test: blockdev write read 8 blocks ...passed 00:19:26.944 Test: blockdev write read size > 128k ...passed 00:19:26.944 Test: blockdev write read invalid size ...passed 00:19:26.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.944 Test: blockdev write read max offset ...passed 00:19:26.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.944 Test: blockdev writev readv 8 blocks ...passed 00:19:26.944 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.944 Test: blockdev writev readv block ...passed 00:19:26.944 Test: blockdev writev readv size > 128k ...passed 00:19:26.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.944 Test: blockdev comparev and writev ...[2024-12-07 05:34:30.166219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.166254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.166261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.166620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.166628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.166637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.166643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.166992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.166999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.167009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.167018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.167376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.167387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.944 [2024-12-07 05:34:30.167397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:26.944 [2024-12-07 05:34:30.167403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:27.205 passed 00:19:27.205 Test: blockdev nvme passthru rw ...passed 00:19:27.205 Test: blockdev nvme passthru vendor specific ...[2024-12-07 05:34:30.250559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.205 [2024-12-07 05:34:30.250569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:27.205 [2024-12-07 05:34:30.250765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.205 [2024-12-07 05:34:30.250772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:27.205 [2024-12-07 05:34:30.250967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.205 [2024-12-07 05:34:30.250974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:27.205 [2024-12-07 05:34:30.251240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.205 [2024-12-07 05:34:30.251247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:27.205 passed 00:19:27.205 Test: blockdev nvme admin passthru ...passed 00:19:27.205 Test: blockdev copy ...passed 00:19:27.205 00:19:27.205 Run Summary: Type Total Ran Passed Failed Inactive 00:19:27.205 suites 1 1 n/a 0 0 00:19:27.205 tests 23 23 23 0 0 00:19:27.205 asserts 152 152 152 0 n/a 00:19:27.205 00:19:27.205 Elapsed time = 1.031 seconds 00:19:27.466 05:34:30 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.466 05:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.466 05:34:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.466 05:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.466 05:34:30 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:27.466 05:34:30 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:27.466 05:34:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:27.466 05:34:30 -- nvmf/common.sh@116 -- # sync 00:19:27.466 05:34:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:27.466 05:34:30 -- nvmf/common.sh@119 -- # set +e 00:19:27.466 05:34:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:27.466 05:34:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:27.466 rmmod nvme_tcp 00:19:27.466 rmmod nvme_fabrics 00:19:27.466 rmmod nvme_keyring 00:19:27.466 05:34:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:27.466 05:34:30 -- nvmf/common.sh@123 -- # set -e 00:19:27.466 05:34:30 -- nvmf/common.sh@124 -- # return 0 00:19:27.466 05:34:30 -- nvmf/common.sh@477 -- # '[' -n 1841362 ']' 00:19:27.466 05:34:30 -- nvmf/common.sh@478 -- # killprocess 1841362 00:19:27.466 05:34:30 -- common/autotest_common.sh@936 -- # '[' -z 1841362 ']' 00:19:27.466 05:34:30 -- common/autotest_common.sh@940 -- # kill -0 1841362 00:19:27.466 05:34:30 -- common/autotest_common.sh@941 -- # uname 00:19:27.466 05:34:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:27.466 05:34:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1841362 00:19:27.725 05:34:30 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:27.725 05:34:30 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:27.725 05:34:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1841362' 00:19:27.725 killing process with pid 1841362 00:19:27.725 05:34:30 -- common/autotest_common.sh@955 -- # kill 1841362 00:19:27.725 05:34:30 -- common/autotest_common.sh@960 -- # wait 1841362 00:19:27.984 05:34:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:27.984 05:34:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:27.984 05:34:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:27.984 05:34:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.984 05:34:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:27.984 05:34:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.984 05:34:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.984 05:34:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.897 05:34:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:29.897 00:19:29.897 real 0m12.416s 00:19:29.897 user 0m14.046s 00:19:29.897 sys 0m6.525s 00:19:29.897 05:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:29.897 05:34:33 -- common/autotest_common.sh@10 -- # set +x 00:19:29.897 ************************************ 00:19:29.897 END TEST nvmf_bdevio_no_huge 00:19:29.897 ************************************ 00:19:30.158 05:34:33 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.158 05:34:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.158 05:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.158 05:34:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.158 ************************************ 00:19:30.158 START TEST nvmf_tls 00:19:30.158 ************************************ 00:19:30.158 05:34:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.158 * Looking for test storage... 
00:19:30.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.158 05:34:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:30.158 05:34:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:30.158 05:34:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:30.158 05:34:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:30.158 05:34:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:30.158 05:34:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:30.158 05:34:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:30.158 05:34:33 -- scripts/common.sh@335 -- # IFS=.-: 00:19:30.158 05:34:33 -- scripts/common.sh@335 -- # read -ra ver1 00:19:30.158 05:34:33 -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.158 05:34:33 -- scripts/common.sh@336 -- # read -ra ver2 00:19:30.158 05:34:33 -- scripts/common.sh@337 -- # local 'op=<' 00:19:30.158 05:34:33 -- scripts/common.sh@339 -- # ver1_l=2 00:19:30.158 05:34:33 -- scripts/common.sh@340 -- # ver2_l=1 00:19:30.158 05:34:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:30.158 05:34:33 -- scripts/common.sh@343 -- # case "$op" in 00:19:30.158 05:34:33 -- scripts/common.sh@344 -- # : 1 00:19:30.158 05:34:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:30.158 05:34:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.158 05:34:33 -- scripts/common.sh@364 -- # decimal 1 00:19:30.158 05:34:33 -- scripts/common.sh@352 -- # local d=1 00:19:30.158 05:34:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.158 05:34:33 -- scripts/common.sh@354 -- # echo 1 00:19:30.158 05:34:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:30.158 05:34:33 -- scripts/common.sh@365 -- # decimal 2 00:19:30.158 05:34:33 -- scripts/common.sh@352 -- # local d=2 00:19:30.158 05:34:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.158 05:34:33 -- scripts/common.sh@354 -- # echo 2 00:19:30.159 05:34:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:30.159 05:34:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:30.159 05:34:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:30.159 05:34:33 -- scripts/common.sh@367 -- # return 0 00:19:30.159 05:34:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.159 05:34:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:30.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.159 --rc genhtml_branch_coverage=1 00:19:30.159 --rc genhtml_function_coverage=1 00:19:30.159 --rc genhtml_legend=1 00:19:30.159 --rc geninfo_all_blocks=1 00:19:30.159 --rc geninfo_unexecuted_blocks=1 00:19:30.159 00:19:30.159 ' 00:19:30.159 05:34:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:30.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.159 --rc genhtml_branch_coverage=1 00:19:30.159 --rc genhtml_function_coverage=1 00:19:30.159 --rc genhtml_legend=1 00:19:30.159 --rc geninfo_all_blocks=1 00:19:30.159 --rc geninfo_unexecuted_blocks=1 00:19:30.159 00:19:30.159 ' 00:19:30.159 05:34:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:30.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.159 --rc genhtml_branch_coverage=1 00:19:30.159 --rc genhtml_function_coverage=1 00:19:30.159 --rc genhtml_legend=1 00:19:30.159 --rc geninfo_all_blocks=1 00:19:30.159 --rc geninfo_unexecuted_blocks=1 00:19:30.159 00:19:30.159 
' 00:19:30.159 05:34:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:30.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.159 --rc genhtml_branch_coverage=1 00:19:30.159 --rc genhtml_function_coverage=1 00:19:30.159 --rc genhtml_legend=1 00:19:30.159 --rc geninfo_all_blocks=1 00:19:30.159 --rc geninfo_unexecuted_blocks=1 00:19:30.159 00:19:30.159 ' 00:19:30.159 05:34:33 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.159 05:34:33 -- nvmf/common.sh@7 -- # uname -s 00:19:30.159 05:34:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.159 05:34:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.159 05:34:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.159 05:34:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.159 05:34:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.159 05:34:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.159 05:34:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.159 05:34:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.159 05:34:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.159 05:34:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.159 05:34:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.159 05:34:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.159 05:34:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.159 05:34:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.159 05:34:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.159 05:34:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.159 05:34:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.159 05:34:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.159 05:34:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.159 05:34:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.159 05:34:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.159 05:34:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.159 05:34:33 -- paths/export.sh@5 -- # export PATH 00:19:30.159 05:34:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.159 05:34:33 -- nvmf/common.sh@46 -- # : 0 00:19:30.159 05:34:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.159 05:34:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.159 05:34:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.159 05:34:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.159 05:34:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.159 05:34:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:30.159 05:34:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.159 05:34:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.159 05:34:33 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:30.159 05:34:33 -- target/tls.sh@71 -- # nvmftestinit 00:19:30.159 05:34:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.159 05:34:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.159 05:34:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.159 05:34:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.159 05:34:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.159 05:34:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.159 05:34:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.159 05:34:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.159 05:34:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:30.159 05:34:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:30.159 05:34:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:30.159 05:34:33 -- common/autotest_common.sh@10 -- # set +x 00:19:38.305 05:34:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:38.305 05:34:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:38.305 05:34:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:38.305 05:34:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:38.305 05:34:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:38.305 05:34:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:38.305 05:34:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:38.305 05:34:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:38.305 05:34:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:38.305 05:34:40 -- nvmf/common.sh@295 -- # e810=() 00:19:38.305 
05:34:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:38.305 05:34:40 -- nvmf/common.sh@296 -- # x722=() 00:19:38.305 05:34:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:38.305 05:34:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:38.305 05:34:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:38.305 05:34:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.305 05:34:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:38.305 05:34:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:38.305 05:34:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:38.305 05:34:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:38.305 05:34:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:38.305 05:34:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:38.305 05:34:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:38.305 05:34:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:38.306 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:38.306 05:34:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:38.306 05:34:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:38.306 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:38.306 05:34:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:38.306 05:34:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:38.306 05:34:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.306 05:34:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:38.306 05:34:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.306 05:34:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:38.306 Found net devices under 
0000:31:00.0: cvl_0_0 00:19:38.306 05:34:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.306 05:34:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:38.306 05:34:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.306 05:34:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:38.306 05:34:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.306 05:34:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:38.306 Found net devices under 0000:31:00.1: cvl_0_1 00:19:38.306 05:34:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.306 05:34:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:38.306 05:34:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:38.306 05:34:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:38.306 05:34:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.306 05:34:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.306 05:34:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.306 05:34:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:38.306 05:34:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.306 05:34:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.306 05:34:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:38.306 05:34:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.306 05:34:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.306 05:34:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:38.306 05:34:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:38.306 05:34:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.306 05:34:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.306 05:34:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.306 05:34:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.306 05:34:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:38.306 05:34:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.306 05:34:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.306 05:34:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.306 05:34:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:38.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:19:38.306 00:19:38.306 --- 10.0.0.2 ping statistics --- 00:19:38.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.306 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:19:38.306 05:34:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:38.306 00:19:38.306 --- 10.0.0.1 ping statistics --- 00:19:38.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.306 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:38.306 05:34:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.306 05:34:40 -- nvmf/common.sh@410 -- # return 0 00:19:38.306 05:34:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:38.306 05:34:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.306 05:34:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:38.306 05:34:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.306 05:34:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:38.306 05:34:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:38.306 05:34:40 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:38.306 05:34:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:38.306 05:34:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.306 05:34:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 05:34:40 -- nvmf/common.sh@469 -- # nvmfpid=1846147 00:19:38.306 05:34:40 -- nvmf/common.sh@470 -- # waitforlisten 1846147 00:19:38.306 05:34:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:38.306 05:34:40 -- common/autotest_common.sh@829 -- # '[' -z 1846147 ']' 00:19:38.306 05:34:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.306 05:34:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.306 05:34:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.306 05:34:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.306 05:34:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 [2024-12-07 05:34:40.811485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:38.306 [2024-12-07 05:34:40.811534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.306 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.306 [2024-12-07 05:34:40.895800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.306 [2024-12-07 05:34:40.958396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:38.306 [2024-12-07 05:34:40.958513] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.306 [2024-12-07 05:34:40.958521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.306 [2024-12-07 05:34:40.958529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
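The nvmf_tcp_init plumbing traced above reduces to a handful of ip/iptables commands. A minimal sketch, reusing the interface, namespace, and address names this particular run reported (they are not fixed values):

# Sketch of the nvmf_tcp_init steps traced above: target port moves into a
# network namespace, initiator port stays in the default namespace.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side keeps the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # reach the target address...
ip netns exec "$NS" ping -c 1 10.0.0.1             # ...and back out to the initiator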
00:19:38.306 [2024-12-07 05:34:40.958548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.568 05:34:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.568 05:34:41 -- common/autotest_common.sh@862 -- # return 0 00:19:38.568 05:34:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:38.568 05:34:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.568 05:34:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.568 05:34:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.568 05:34:41 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:38.568 05:34:41 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:38.568 true 00:19:38.568 05:34:41 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.568 05:34:41 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:38.830 05:34:41 -- target/tls.sh@82 -- # version=0 00:19:38.830 05:34:41 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:38.830 05:34:41 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:39.090 05:34:42 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.090 05:34:42 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:39.090 05:34:42 -- target/tls.sh@90 -- # version=13 00:19:39.090 05:34:42 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:39.090 05:34:42 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:39.352 05:34:42 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.352 05:34:42 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:39.352 05:34:42 -- target/tls.sh@98 -- # version=7 00:19:39.352 05:34:42 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:39.352 05:34:42 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.352 05:34:42 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:39.613 05:34:42 -- target/tls.sh@105 -- # ktls=false 00:19:39.613 05:34:42 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:39.613 05:34:42 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:39.873 05:34:42 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.873 05:34:42 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:39.873 05:34:43 -- target/tls.sh@113 -- # ktls=true 00:19:39.873 05:34:43 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:39.873 05:34:43 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:40.134 05:34:43 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.134 05:34:43 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:40.394 05:34:43 -- target/tls.sh@121 -- # ktls=false 00:19:40.394 05:34:43 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:40.395 05:34:43 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:40.395 05:34:43 -- target/tls.sh@49 -- # local key hash crc 00:19:40.395 05:34:43 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:40.395 05:34:43 -- target/tls.sh@51 -- # hash=01 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # tail -c8 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # head -c 4 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # crc='p$H�' 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.395 05:34:43 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.395 05:34:43 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:40.395 05:34:43 -- target/tls.sh@49 -- # local key hash crc 00:19:40.395 05:34:43 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:40.395 05:34:43 -- target/tls.sh@51 -- # hash=01 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # tail -c8 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # head -c 4 00:19:40.395 05:34:43 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:40.395 05:34:43 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.395 05:34:43 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.395 05:34:43 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.395 05:34:43 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.395 05:34:43 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.395 05:34:43 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.395 05:34:43 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.395 05:34:43 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.395 05:34:43 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:40.395 05:34:43 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:40.655 05:34:43 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.655 05:34:43 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.655 05:34:43 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.914 [2024-12-07 05:34:44.016652] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
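The format_interchange_psk trace just above is the whole recipe for turning a hex key into an NVMe TLS interchange-format PSK string. Condensed into a standalone sketch with the same key and hash label as this run:

# Build "NVMeTLSkey-1:01:<base64(key || CRC32)>:"; the CRC32 is taken from the
# gzip trailer (last 8 bytes = CRC32 + ISIZE, little-endian), exactly as traced.
key=00112233445566778899aabbccddeeff
hash=01
b64=$( { echo -n "$key"
         echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64 )
echo "NVMeTLSkey-1:${hash}:${b64}:"
# For this key the run prints:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: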
00:19:40.914 05:34:44 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.174 05:34:44 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.174 [2024-12-07 05:34:44.345456] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.174 [2024-12-07 05:34:44.345672] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.174 05:34:44 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:41.433 malloc0 00:19:41.434 05:34:44 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:41.695 05:34:44 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.695 05:34:44 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.695 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.928 Initializing NVMe Controllers 00:19:53.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:53.928 Initialization complete. Launching workers. 
00:19:53.928 ======================================================== 00:19:53.928 Latency(us) 00:19:53.928 Device Information : IOPS MiB/s Average min max 00:19:53.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19330.32 75.51 3310.84 1116.11 4083.98 00:19:53.928 ======================================================== 00:19:53.928 Total : 19330.32 75.51 3310.84 1116.11 4083.98 00:19:53.928 00:19:53.928 05:34:54 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.928 05:34:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.928 05:34:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.928 05:34:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.928 05:34:54 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:53.928 05:34:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.928 05:34:54 -- target/tls.sh@28 -- # bdevperf_pid=1848942 00:19:53.928 05:34:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.928 05:34:54 -- target/tls.sh@31 -- # waitforlisten 1848942 /var/tmp/bdevperf.sock 00:19:53.928 05:34:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.928 05:34:54 -- common/autotest_common.sh@829 -- # '[' -z 1848942 ']' 00:19:53.928 05:34:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.928 05:34:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.928 05:34:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.928 05:34:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.928 05:34:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.928 [2024-12-07 05:34:54.984108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
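For reference, the target-side sequence traced above (sock options, transport, subsystem, TLS listener, namespace, host/PSK binding, then the SSL perf pass) condenses to roughly the following; the RPC calls are lifted from the log, with only the long workspace paths folded into variables.

# Condensed target-side TLS setup as traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=$SPDK/test/nvmf/target/key1.txt
$SPDK/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13      # pin TLS 1.3 on the ssl impl
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# SSL data-path check from inside the target namespace, as in the trace:
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$KEY"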
00:19:53.928 [2024-12-07 05:34:54.984164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848942 ] 00:19:53.928 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.928 [2024-12-07 05:34:55.035228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.928 [2024-12-07 05:34:55.086224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.928 05:34:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.928 05:34:55 -- common/autotest_common.sh@862 -- # return 0 00:19:53.928 05:34:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.928 [2024-12-07 05:34:55.959582] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.928 TLSTESTn1 00:19:53.928 05:34:56 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:53.928 Running I/O for 10 seconds... 00:20:03.926 00:20:03.926 Latency(us) 00:20:03.927 [2024-12-07T04:35:07.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.927 [2024-12-07T04:35:07.167Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.927 Verification LBA range: start 0x0 length 0x2000 00:20:03.927 TLSTESTn1 : 10.01 6749.80 26.37 0.00 0.00 18943.54 3345.07 49370.45 00:20:03.927 [2024-12-07T04:35:07.167Z] =================================================================================================================== 00:20:03.927 [2024-12-07T04:35:07.167Z] Total : 6749.80 26.37 0.00 0.00 18943.54 3345.07 49370.45 00:20:03.927 0 00:20:03.927 05:35:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.927 05:35:06 -- target/tls.sh@45 -- # killprocess 1848942 00:20:03.927 05:35:06 -- common/autotest_common.sh@936 -- # '[' -z 1848942 ']' 00:20:03.927 05:35:06 -- common/autotest_common.sh@940 -- # kill -0 1848942 00:20:03.927 05:35:06 -- common/autotest_common.sh@941 -- # uname 00:20:03.927 05:35:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:03.927 05:35:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1848942 00:20:03.927 05:35:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:03.927 05:35:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:03.927 05:35:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1848942' 00:20:03.927 killing process with pid 1848942 00:20:03.927 05:35:06 -- common/autotest_common.sh@955 -- # kill 1848942 00:20:03.927 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.927 00:20:03.927 Latency(us) 00:20:03.927 [2024-12-07T04:35:07.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.927 [2024-12-07T04:35:07.167Z] =================================================================================================================== 00:20:03.927 [2024-12-07T04:35:07.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.927 05:35:06 -- 
common/autotest_common.sh@960 -- # wait 1848942 00:20:03.927 05:35:06 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:03.927 05:35:06 -- common/autotest_common.sh@650 -- # local es=0 00:20:03.927 05:35:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:03.927 05:35:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:03.927 05:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.927 05:35:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:03.927 05:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.927 05:35:06 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:03.927 05:35:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.927 05:35:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.927 05:35:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:03.927 05:35:06 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:03.927 05:35:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.927 05:35:06 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.927 05:35:06 -- target/tls.sh@28 -- # bdevperf_pid=1851285 00:20:03.927 05:35:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.927 05:35:06 -- target/tls.sh@31 -- # waitforlisten 1851285 /var/tmp/bdevperf.sock 00:20:03.927 05:35:06 -- common/autotest_common.sh@829 -- # '[' -z 1851285 ']' 00:20:03.927 05:35:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.927 05:35:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.927 05:35:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.927 05:35:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.927 05:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:03.927 [2024-12-07 05:35:06.386634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
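Every run_bdevperf case in this log follows the same three initiator-side steps; a sketch with the paths from the successful key1.txt run:

# 1. Start bdevperf idle (-z) on its own RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
# 2. Attach the TLS-secured controller; --psk points at the interchange-format key file.
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk $SPDK/test/nvmf/target/key1.txt            # TLS handshake happens here
# 3. Drive I/O against the freshly created TLSTESTn1 bdev.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests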
00:20:03.927 [2024-12-07 05:35:06.386690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851285 ] 00:20:03.927 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.927 [2024-12-07 05:35:06.453797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.927 [2024-12-07 05:35:06.522686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.188 05:35:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.188 05:35:07 -- common/autotest_common.sh@862 -- # return 0 00:20:04.188 05:35:07 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.188 [2024-12-07 05:35:07.398887] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.188 [2024-12-07 05:35:07.407204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:04.188 [2024-12-07 05:35:07.407706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10db610 (107): Transport endpoint is not connected 00:20:04.188 [2024-12-07 05:35:07.408702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10db610 (9): Bad file descriptor 00:20:04.188 [2024-12-07 05:35:07.409703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.188 [2024-12-07 05:35:07.409710] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:04.188 [2024-12-07 05:35:07.409715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
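What tls.sh@155 asserts here is simply that the attach must fail when host1 presents key2.txt, which the target never registered; the JSON-RPC error dump that follows is the expected outcome. A hedged sketch of that assertion, with paths and NQNs as in the trace:

# Expected-failure sketch: attaching with the unregistered key2.txt must not succeed.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
if $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key2.txt; then
    echo "unexpected: TLS attach succeeded with the wrong PSK" >&2
    exit 1
fi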
00:20:04.188 request: 00:20:04.188 { 00:20:04.188 "name": "TLSTEST", 00:20:04.188 "trtype": "tcp", 00:20:04.188 "traddr": "10.0.0.2", 00:20:04.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.188 "adrfam": "ipv4", 00:20:04.188 "trsvcid": "4420", 00:20:04.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.188 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:04.188 "method": "bdev_nvme_attach_controller", 00:20:04.188 "req_id": 1 00:20:04.188 } 00:20:04.188 Got JSON-RPC error response 00:20:04.188 response: 00:20:04.188 { 00:20:04.188 "code": -32602, 00:20:04.188 "message": "Invalid parameters" 00:20:04.188 } 00:20:04.188 05:35:07 -- target/tls.sh@36 -- # killprocess 1851285 00:20:04.188 05:35:07 -- common/autotest_common.sh@936 -- # '[' -z 1851285 ']' 00:20:04.188 05:35:07 -- common/autotest_common.sh@940 -- # kill -0 1851285 00:20:04.188 05:35:07 -- common/autotest_common.sh@941 -- # uname 00:20:04.449 05:35:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.449 05:35:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1851285 00:20:04.449 05:35:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:04.449 05:35:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:04.449 05:35:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1851285' 00:20:04.449 killing process with pid 1851285 00:20:04.449 05:35:07 -- common/autotest_common.sh@955 -- # kill 1851285 00:20:04.449 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.449 00:20:04.449 Latency(us) 00:20:04.449 [2024-12-07T04:35:07.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.449 [2024-12-07T04:35:07.689Z] =================================================================================================================== 00:20:04.449 [2024-12-07T04:35:07.689Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.449 05:35:07 -- common/autotest_common.sh@960 -- # wait 1851285 00:20:04.449 05:35:07 -- target/tls.sh@37 -- # return 1 00:20:04.449 05:35:07 -- common/autotest_common.sh@653 -- # es=1 00:20:04.449 05:35:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.449 05:35:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.449 05:35:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.449 05:35:07 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.449 05:35:07 -- common/autotest_common.sh@650 -- # local es=0 00:20:04.449 05:35:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.449 05:35:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:04.449 05:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.449 05:35:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:04.449 05:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.449 05:35:07 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.449 05:35:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.449 05:35:07 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.449 05:35:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:04.449 05:35:07 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:04.449 05:35:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.449 05:35:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.449 05:35:07 -- target/tls.sh@28 -- # bdevperf_pid=1851586 00:20:04.449 05:35:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.449 05:35:07 -- target/tls.sh@31 -- # waitforlisten 1851586 /var/tmp/bdevperf.sock 00:20:04.449 05:35:07 -- common/autotest_common.sh@829 -- # '[' -z 1851586 ']' 00:20:04.449 05:35:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.449 05:35:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.449 05:35:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.449 05:35:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.449 05:35:07 -- common/autotest_common.sh@10 -- # set +x 00:20:04.449 [2024-12-07 05:35:07.627330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:04.449 [2024-12-07 05:35:07.627386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851586 ] 00:20:04.449 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.735 [2024-12-07 05:35:07.694626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.735 [2024-12-07 05:35:07.763158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.404 05:35:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.404 05:35:08 -- common/autotest_common.sh@862 -- # return 0 00:20:05.404 05:35:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.666 [2024-12-07 05:35:08.639100] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.666 [2024-12-07 05:35:08.650129] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.666 [2024-12-07 05:35:08.650149] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.666 [2024-12-07 05:35:08.650168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.666 [2024-12-07 05:35:08.651055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78610 (107): Transport endpoint is not connected 00:20:05.666 
[2024-12-07 05:35:08.652051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78610 (9): Bad file descriptor 00:20:05.666 [2024-12-07 05:35:08.653052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.666 [2024-12-07 05:35:08.653059] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.667 [2024-12-07 05:35:08.653065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.667 request: 00:20:05.667 { 00:20:05.667 "name": "TLSTEST", 00:20:05.667 "trtype": "tcp", 00:20:05.667 "traddr": "10.0.0.2", 00:20:05.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:05.667 "adrfam": "ipv4", 00:20:05.667 "trsvcid": "4420", 00:20:05.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.667 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:05.667 "method": "bdev_nvme_attach_controller", 00:20:05.667 "req_id": 1 00:20:05.667 } 00:20:05.667 Got JSON-RPC error response 00:20:05.667 response: 00:20:05.667 { 00:20:05.667 "code": -32602, 00:20:05.667 "message": "Invalid parameters" 00:20:05.667 } 00:20:05.667 05:35:08 -- target/tls.sh@36 -- # killprocess 1851586 00:20:05.667 05:35:08 -- common/autotest_common.sh@936 -- # '[' -z 1851586 ']' 00:20:05.667 05:35:08 -- common/autotest_common.sh@940 -- # kill -0 1851586 00:20:05.667 05:35:08 -- common/autotest_common.sh@941 -- # uname 00:20:05.667 05:35:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.667 05:35:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1851586 00:20:05.667 05:35:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:05.667 05:35:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:05.667 05:35:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1851586' 00:20:05.667 killing process with pid 1851586 00:20:05.667 05:35:08 -- common/autotest_common.sh@955 -- # kill 1851586 00:20:05.667 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.667 00:20:05.667 Latency(us) 00:20:05.667 [2024-12-07T04:35:08.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.667 [2024-12-07T04:35:08.907Z] =================================================================================================================== 00:20:05.667 [2024-12-07T04:35:08.907Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.667 05:35:08 -- common/autotest_common.sh@960 -- # wait 1851586 00:20:05.667 05:35:08 -- target/tls.sh@37 -- # return 1 00:20:05.667 05:35:08 -- common/autotest_common.sh@653 -- # es=1 00:20:05.667 05:35:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.667 05:35:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.667 05:35:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.667 05:35:08 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.667 05:35:08 -- common/autotest_common.sh@650 -- # local es=0 00:20:05.667 05:35:08 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.667 05:35:08 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.667 05:35:08 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.667 05:35:08 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.667 05:35:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.667 05:35:08 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.667 05:35:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.667 05:35:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:05.667 05:35:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.667 05:35:08 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:05.667 05:35:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.667 05:35:08 -- target/tls.sh@28 -- # bdevperf_pid=1851705 00:20:05.667 05:35:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.667 05:35:08 -- target/tls.sh@31 -- # waitforlisten 1851705 /var/tmp/bdevperf.sock 00:20:05.667 05:35:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.667 05:35:08 -- common/autotest_common.sh@829 -- # '[' -z 1851705 ']' 00:20:05.667 05:35:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.667 05:35:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.667 05:35:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.667 05:35:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.667 05:35:08 -- common/autotest_common.sh@10 -- # set +x 00:20:05.667 [2024-12-07 05:35:08.887336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:05.667 [2024-12-07 05:35:08.887394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851705 ] 00:20:05.929 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.929 [2024-12-07 05:35:08.938401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.929 [2024-12-07 05:35:08.988614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.501 05:35:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.501 05:35:09 -- common/autotest_common.sh@862 -- # return 0 00:20:06.501 05:35:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.762 [2024-12-07 05:35:09.861622] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.762 [2024-12-07 05:35:09.865821] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.762 [2024-12-07 05:35:09.865839] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.762 [2024-12-07 05:35:09.865858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.762 [2024-12-07 05:35:09.866517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7610 (107): Transport endpoint is not connected 00:20:06.763 [2024-12-07 05:35:09.867512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7610 (9): Bad file descriptor 00:20:06.763 [2024-12-07 05:35:09.868514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.763 [2024-12-07 05:35:09.868520] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.763 [2024-12-07 05:35:09.868526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
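The "Could not find PSK for identity" errors above make the lookup key explicit: the target derives it from the host and subsystem NQNs, so pointing bdevperf at cnode2 (or host2 in the previous case) yields an identity with no registered PSK. A tiny sketch of the identity string for the case just traced:

# PSK identity as it appears in the target's error log for this case;
# nothing is registered under it, so the handshake aborts.
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
echo "NVMe0R01 ${hostnqn} ${subnqn}"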
00:20:06.763 request: 00:20:06.763 { 00:20:06.763 "name": "TLSTEST", 00:20:06.763 "trtype": "tcp", 00:20:06.763 "traddr": "10.0.0.2", 00:20:06.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.763 "adrfam": "ipv4", 00:20:06.763 "trsvcid": "4420", 00:20:06.763 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:06.763 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:06.763 "method": "bdev_nvme_attach_controller", 00:20:06.763 "req_id": 1 00:20:06.763 } 00:20:06.763 Got JSON-RPC error response 00:20:06.763 response: 00:20:06.763 { 00:20:06.763 "code": -32602, 00:20:06.763 "message": "Invalid parameters" 00:20:06.763 } 00:20:06.763 05:35:09 -- target/tls.sh@36 -- # killprocess 1851705 00:20:06.763 05:35:09 -- common/autotest_common.sh@936 -- # '[' -z 1851705 ']' 00:20:06.763 05:35:09 -- common/autotest_common.sh@940 -- # kill -0 1851705 00:20:06.763 05:35:09 -- common/autotest_common.sh@941 -- # uname 00:20:06.763 05:35:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.763 05:35:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1851705 00:20:06.763 05:35:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.763 05:35:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.763 05:35:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1851705' 00:20:06.763 killing process with pid 1851705 00:20:06.763 05:35:09 -- common/autotest_common.sh@955 -- # kill 1851705 00:20:06.763 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.763 00:20:06.763 Latency(us) 00:20:06.763 [2024-12-07T04:35:10.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.763 [2024-12-07T04:35:10.003Z] =================================================================================================================== 00:20:06.763 [2024-12-07T04:35:10.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.763 05:35:09 -- common/autotest_common.sh@960 -- # wait 1851705 00:20:07.025 05:35:10 -- target/tls.sh@37 -- # return 1 00:20:07.025 05:35:10 -- common/autotest_common.sh@653 -- # es=1 00:20:07.025 05:35:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.025 05:35:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.025 05:35:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.025 05:35:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.025 05:35:10 -- common/autotest_common.sh@650 -- # local es=0 00:20:07.025 05:35:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.025 05:35:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:07.025 05:35:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.025 05:35:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:07.025 05:35:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.025 05:35:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.025 05:35:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.025 05:35:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.025 05:35:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.025 05:35:10 -- target/tls.sh@23 -- # psk= 00:20:07.025 05:35:10 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.025 05:35:10 -- target/tls.sh@28 -- # bdevperf_pid=1852003 00:20:07.025 05:35:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.025 05:35:10 -- target/tls.sh@31 -- # waitforlisten 1852003 /var/tmp/bdevperf.sock 00:20:07.025 05:35:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.025 05:35:10 -- common/autotest_common.sh@829 -- # '[' -z 1852003 ']' 00:20:07.025 05:35:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.025 05:35:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.025 05:35:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.025 05:35:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.025 05:35:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.025 [2024-12-07 05:35:10.130271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:07.025 [2024-12-07 05:35:10.130342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852003 ] 00:20:07.025 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.025 [2024-12-07 05:35:10.182968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.025 [2024-12-07 05:35:10.233835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.971 05:35:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.971 05:35:10 -- common/autotest_common.sh@862 -- # return 0 00:20:07.971 05:35:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.971 [2024-12-07 05:35:11.046004] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.971 [2024-12-07 05:35:11.047262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956720 (9): Bad file descriptor 00:20:07.971 [2024-12-07 05:35:11.048261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.971 [2024-12-07 05:35:11.048269] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.971 [2024-12-07 05:35:11.048274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
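tls.sh@164 drops the --psk argument entirely; since the 4420 listener was added with -k, the non-TLS attach is expected to fail, which is what the "Bad file descriptor" / failed-to-initialize errors above show. Sketch of the assertion, assuming the same NQNs:

# Expected-failure sketch: no --psk at all against the listener created with -k.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
if $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
    echo "unexpected: attach without a PSK succeeded against the TLS listener" >&2
    exit 1
fi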
00:20:07.971 request: 00:20:07.971 { 00:20:07.971 "name": "TLSTEST", 00:20:07.971 "trtype": "tcp", 00:20:07.971 "traddr": "10.0.0.2", 00:20:07.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.971 "adrfam": "ipv4", 00:20:07.971 "trsvcid": "4420", 00:20:07.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.971 "method": "bdev_nvme_attach_controller", 00:20:07.971 "req_id": 1 00:20:07.971 } 00:20:07.971 Got JSON-RPC error response 00:20:07.971 response: 00:20:07.971 { 00:20:07.971 "code": -32602, 00:20:07.971 "message": "Invalid parameters" 00:20:07.971 } 00:20:07.971 05:35:11 -- target/tls.sh@36 -- # killprocess 1852003 00:20:07.971 05:35:11 -- common/autotest_common.sh@936 -- # '[' -z 1852003 ']' 00:20:07.971 05:35:11 -- common/autotest_common.sh@940 -- # kill -0 1852003 00:20:07.971 05:35:11 -- common/autotest_common.sh@941 -- # uname 00:20:07.971 05:35:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.971 05:35:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1852003 00:20:07.971 05:35:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:07.971 05:35:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:07.971 05:35:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1852003' 00:20:07.971 killing process with pid 1852003 00:20:07.971 05:35:11 -- common/autotest_common.sh@955 -- # kill 1852003 00:20:07.971 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.971 00:20:07.971 Latency(us) 00:20:07.971 [2024-12-07T04:35:11.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.971 [2024-12-07T04:35:11.211Z] =================================================================================================================== 00:20:07.971 [2024-12-07T04:35:11.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.971 05:35:11 -- common/autotest_common.sh@960 -- # wait 1852003 00:20:08.232 05:35:11 -- target/tls.sh@37 -- # return 1 00:20:08.232 05:35:11 -- common/autotest_common.sh@653 -- # es=1 00:20:08.232 05:35:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.232 05:35:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.232 05:35:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.232 05:35:11 -- target/tls.sh@167 -- # killprocess 1846147 00:20:08.232 05:35:11 -- common/autotest_common.sh@936 -- # '[' -z 1846147 ']' 00:20:08.232 05:35:11 -- common/autotest_common.sh@940 -- # kill -0 1846147 00:20:08.232 05:35:11 -- common/autotest_common.sh@941 -- # uname 00:20:08.232 05:35:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.232 05:35:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1846147 00:20:08.232 05:35:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:08.232 05:35:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:08.232 05:35:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1846147' 00:20:08.232 killing process with pid 1846147 00:20:08.232 05:35:11 -- common/autotest_common.sh@955 -- # kill 1846147 00:20:08.232 05:35:11 -- common/autotest_common.sh@960 -- # wait 1846147 00:20:08.232 05:35:11 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:08.232 05:35:11 -- target/tls.sh@49 -- # local key hash crc 00:20:08.232 05:35:11 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:08.232 05:35:11 -- 
target/tls.sh@51 -- # hash=02 00:20:08.232 05:35:11 -- target/tls.sh@52 -- # gzip -1 -c 00:20:08.232 05:35:11 -- target/tls.sh@52 -- # head -c 4 00:20:08.232 05:35:11 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:08.232 05:35:11 -- target/tls.sh@52 -- # tail -c8 00:20:08.232 05:35:11 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:08.232 05:35:11 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:08.232 05:35:11 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:08.232 05:35:11 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.232 05:35:11 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.232 05:35:11 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:08.232 05:35:11 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.232 05:35:11 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:08.232 05:35:11 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:08.232 05:35:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.232 05:35:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.232 05:35:11 -- common/autotest_common.sh@10 -- # set +x 00:20:08.232 05:35:11 -- nvmf/common.sh@469 -- # nvmfpid=1852361 00:20:08.232 05:35:11 -- nvmf/common.sh@470 -- # waitforlisten 1852361 00:20:08.232 05:35:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.232 05:35:11 -- common/autotest_common.sh@829 -- # '[' -z 1852361 ']' 00:20:08.232 05:35:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.232 05:35:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.232 05:35:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.232 05:35:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.232 05:35:11 -- common/autotest_common.sh@10 -- # set +x 00:20:08.492 [2024-12-07 05:35:11.512497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:08.492 [2024-12-07 05:35:11.512552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.492 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.493 [2024-12-07 05:35:11.595696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.493 [2024-12-07 05:35:11.648258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:08.493 [2024-12-07 05:35:11.648350] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.493 [2024-12-07 05:35:11.648355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
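Two details worth pulling out of the key_long.txt handling above: the longer key gets hash label 02 in the interchange string, and the file is deliberately written 0600, since a looser mode is rejected later in this run (the -22 "Could not retrieve PSK from file" case at the end). A sketch, reusing the exact string the trace produced:

# Persist the hash-02 interchange key with owner-only permissions.
umask 077
echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' \
    > key_long.txt
chmod 0600 key_long.txt    # loosening this to 0666 later in the run makes the attach fail with -22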
00:20:08.493 [2024-12-07 05:35:11.648360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.493 [2024-12-07 05:35:11.648374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.063 05:35:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.063 05:35:12 -- common/autotest_common.sh@862 -- # return 0 00:20:09.063 05:35:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:09.063 05:35:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.063 05:35:12 -- common/autotest_common.sh@10 -- # set +x 00:20:09.324 05:35:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.324 05:35:12 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.324 05:35:12 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.324 05:35:12 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.324 [2024-12-07 05:35:12.462589] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.324 05:35:12 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.585 05:35:12 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.585 [2024-12-07 05:35:12.823484] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.846 [2024-12-07 05:35:12.823683] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.846 05:35:12 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.846 malloc0 00:20:09.846 05:35:12 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.107 05:35:13 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.367 05:35:13 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.367 05:35:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:10.367 05:35:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:10.367 05:35:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:10.367 05:35:13 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:10.367 05:35:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.367 05:35:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.367 05:35:13 -- target/tls.sh@28 -- # bdevperf_pid=1852729 00:20:10.367 05:35:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.367 05:35:13 -- 
target/tls.sh@31 -- # waitforlisten 1852729 /var/tmp/bdevperf.sock 00:20:10.367 05:35:13 -- common/autotest_common.sh@829 -- # '[' -z 1852729 ']' 00:20:10.367 05:35:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.367 05:35:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.367 05:35:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.367 05:35:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.367 05:35:13 -- common/autotest_common.sh@10 -- # set +x 00:20:10.367 [2024-12-07 05:35:13.386382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:10.367 [2024-12-07 05:35:13.386435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852729 ] 00:20:10.367 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.368 [2024-12-07 05:35:13.439507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.368 [2024-12-07 05:35:13.490441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.368 05:35:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.368 05:35:13 -- common/autotest_common.sh@862 -- # return 0 00:20:10.368 05:35:13 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.628 [2024-12-07 05:35:13.705906] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.629 TLSTESTn1 00:20:10.629 05:35:13 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:10.890 Running I/O for 10 seconds... 
00:20:20.891 00:20:20.891 Latency(us) 00:20:20.891 [2024-12-07T04:35:24.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.891 [2024-12-07T04:35:24.131Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.891 Verification LBA range: start 0x0 length 0x2000 00:20:20.891 TLSTESTn1 : 10.01 6053.34 23.65 0.00 0.00 21127.40 2457.60 55487.15 00:20:20.891 [2024-12-07T04:35:24.131Z] =================================================================================================================== 00:20:20.891 [2024-12-07T04:35:24.131Z] Total : 6053.34 23.65 0.00 0.00 21127.40 2457.60 55487.15 00:20:20.891 0 00:20:20.891 05:35:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.891 05:35:23 -- target/tls.sh@45 -- # killprocess 1852729 00:20:20.891 05:35:23 -- common/autotest_common.sh@936 -- # '[' -z 1852729 ']' 00:20:20.891 05:35:23 -- common/autotest_common.sh@940 -- # kill -0 1852729 00:20:20.891 05:35:23 -- common/autotest_common.sh@941 -- # uname 00:20:20.891 05:35:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.891 05:35:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1852729 00:20:20.891 05:35:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:20.891 05:35:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:20.891 05:35:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1852729' 00:20:20.891 killing process with pid 1852729 00:20:20.891 05:35:24 -- common/autotest_common.sh@955 -- # kill 1852729 00:20:20.891 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.891 00:20:20.891 Latency(us) 00:20:20.891 [2024-12-07T04:35:24.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.891 [2024-12-07T04:35:24.131Z] =================================================================================================================== 00:20:20.891 [2024-12-07T04:35:24.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.891 05:35:24 -- common/autotest_common.sh@960 -- # wait 1852729 00:20:20.891 05:35:24 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.891 05:35:24 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.891 05:35:24 -- common/autotest_common.sh@650 -- # local es=0 00:20:20.891 05:35:24 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.891 05:35:24 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:20.891 05:35:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.891 05:35:24 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:20.891 05:35:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.891 05:35:24 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.891 05:35:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.891 05:35:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.891 05:35:24 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.891 05:35:24 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:20.891 05:35:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.891 05:35:24 -- target/tls.sh@28 -- # bdevperf_pid=1854771 00:20:20.891 05:35:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.891 05:35:24 -- target/tls.sh@31 -- # waitforlisten 1854771 /var/tmp/bdevperf.sock 00:20:20.891 05:35:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.891 05:35:24 -- common/autotest_common.sh@829 -- # '[' -z 1854771 ']' 00:20:20.891 05:35:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.891 05:35:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.891 05:35:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.891 05:35:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.891 05:35:24 -- common/autotest_common.sh@10 -- # set +x 00:20:21.151 [2024-12-07 05:35:24.167549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.151 [2024-12-07 05:35:24.167606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854771 ] 00:20:21.151 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.151 [2024-12-07 05:35:24.218685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.151 [2024-12-07 05:35:24.268571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.093 05:35:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.093 05:35:24 -- common/autotest_common.sh@862 -- # return 0 00:20:22.093 05:35:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.093 [2024-12-07 05:35:25.129884] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.093 [2024-12-07 05:35:25.129912] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:22.093 request: 00:20:22.093 { 00:20:22.093 "name": "TLSTEST", 00:20:22.093 "trtype": "tcp", 00:20:22.093 "traddr": "10.0.0.2", 00:20:22.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.093 "adrfam": "ipv4", 00:20:22.093 "trsvcid": "4420", 00:20:22.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.093 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:22.093 "method": "bdev_nvme_attach_controller", 00:20:22.093 "req_id": 1 00:20:22.093 } 00:20:22.093 Got JSON-RPC error response 00:20:22.093 response: 00:20:22.093 { 00:20:22.093 "code": -22, 00:20:22.093 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:22.093 } 
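The failure above is the expected negative case in target/tls.sh: the PSK file was made world-readable at tls.sh@179 (chmod 0666), and bdev_nvme_attach_controller refuses to load it, returning JSON-RPC error -22 with "Could not retrieve PSK from file". A condensed sketch of that check, built only from commands that appear in this log ($KEY is a shorthand introduced here for readability):

  KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
  chmod 0666 "$KEY"   # deliberately too-permissive mode, as done at tls.sh@179 above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # fails as shown above: "Incorrect permissions for PSK file" -> JSON-RPC code -22
  chmod 0600 "$KEY"   # the mode tls.sh@190 restores further down before the key is accepted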
00:20:22.093 05:35:25 -- target/tls.sh@36 -- # killprocess 1854771 00:20:22.093 05:35:25 -- common/autotest_common.sh@936 -- # '[' -z 1854771 ']' 00:20:22.093 05:35:25 -- common/autotest_common.sh@940 -- # kill -0 1854771 00:20:22.093 05:35:25 -- common/autotest_common.sh@941 -- # uname 00:20:22.093 05:35:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.093 05:35:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1854771 00:20:22.093 05:35:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:22.093 05:35:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:22.093 05:35:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1854771' 00:20:22.093 killing process with pid 1854771 00:20:22.093 05:35:25 -- common/autotest_common.sh@955 -- # kill 1854771 00:20:22.093 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.093 00:20:22.093 Latency(us) 00:20:22.093 [2024-12-07T04:35:25.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.093 [2024-12-07T04:35:25.333Z] =================================================================================================================== 00:20:22.093 [2024-12-07T04:35:25.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.093 05:35:25 -- common/autotest_common.sh@960 -- # wait 1854771 00:20:22.093 05:35:25 -- target/tls.sh@37 -- # return 1 00:20:22.093 05:35:25 -- common/autotest_common.sh@653 -- # es=1 00:20:22.093 05:35:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:22.093 05:35:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:22.093 05:35:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:22.093 05:35:25 -- target/tls.sh@183 -- # killprocess 1852361 00:20:22.093 05:35:25 -- common/autotest_common.sh@936 -- # '[' -z 1852361 ']' 00:20:22.093 05:35:25 -- common/autotest_common.sh@940 -- # kill -0 1852361 00:20:22.093 05:35:25 -- common/autotest_common.sh@941 -- # uname 00:20:22.094 05:35:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.354 05:35:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1852361 00:20:22.354 05:35:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:22.354 05:35:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:22.354 05:35:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1852361' 00:20:22.354 killing process with pid 1852361 00:20:22.354 05:35:25 -- common/autotest_common.sh@955 -- # kill 1852361 00:20:22.354 05:35:25 -- common/autotest_common.sh@960 -- # wait 1852361 00:20:22.354 05:35:25 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:22.354 05:35:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.354 05:35:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.354 05:35:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.354 05:35:25 -- nvmf/common.sh@469 -- # nvmfpid=1855117 00:20:22.354 05:35:25 -- nvmf/common.sh@470 -- # waitforlisten 1855117 00:20:22.354 05:35:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.354 05:35:25 -- common/autotest_common.sh@829 -- # '[' -z 1855117 ']' 00:20:22.354 05:35:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.354 05:35:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.354 05:35:25 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.354 05:35:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.354 05:35:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.354 [2024-12-07 05:35:25.565164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:22.354 [2024-12-07 05:35:25.565214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.613 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.613 [2024-12-07 05:35:25.646722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.613 [2024-12-07 05:35:25.697478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.613 [2024-12-07 05:35:25.697573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.613 [2024-12-07 05:35:25.697579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.613 [2024-12-07 05:35:25.697584] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.613 [2024-12-07 05:35:25.697607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.182 05:35:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.182 05:35:26 -- common/autotest_common.sh@862 -- # return 0 00:20:23.182 05:35:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.182 05:35:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.182 05:35:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.182 05:35:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.182 05:35:26 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.182 05:35:26 -- common/autotest_common.sh@650 -- # local es=0 00:20:23.182 05:35:26 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.182 05:35:26 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:23.182 05:35:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.182 05:35:26 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:23.182 05:35:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.182 05:35:26 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.182 05:35:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.182 05:35:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.442 [2024-12-07 05:35:26.539911] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.442 05:35:26 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.702 05:35:26 -- 
target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.702 [2024-12-07 05:35:26.860705] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.703 [2024-12-07 05:35:26.860919] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.703 05:35:26 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.963 malloc0 00:20:23.963 05:35:27 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.963 05:35:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.224 [2024-12-07 05:35:27.315403] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:24.224 [2024-12-07 05:35:27.315421] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:24.224 [2024-12-07 05:35:27.315433] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:24.224 request: 00:20:24.224 { 00:20:24.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.224 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.224 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:24.224 "method": "nvmf_subsystem_add_host", 00:20:24.224 "req_id": 1 00:20:24.224 } 00:20:24.224 Got JSON-RPC error response 00:20:24.224 response: 00:20:24.224 { 00:20:24.224 "code": -32603, 00:20:24.224 "message": "Internal error" 00:20:24.224 } 00:20:24.224 05:35:27 -- common/autotest_common.sh@653 -- # es=1 00:20:24.224 05:35:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.224 05:35:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.224 05:35:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.224 05:35:27 -- target/tls.sh@189 -- # killprocess 1855117 00:20:24.224 05:35:27 -- common/autotest_common.sh@936 -- # '[' -z 1855117 ']' 00:20:24.224 05:35:27 -- common/autotest_common.sh@940 -- # kill -0 1855117 00:20:24.224 05:35:27 -- common/autotest_common.sh@941 -- # uname 00:20:24.224 05:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:24.224 05:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1855117 00:20:24.224 05:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:24.224 05:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:24.224 05:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1855117' 00:20:24.224 killing process with pid 1855117 00:20:24.224 05:35:27 -- common/autotest_common.sh@955 -- # kill 1855117 00:20:24.224 05:35:27 -- common/autotest_common.sh@960 -- # wait 1855117 00:20:24.485 05:35:27 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.485 05:35:27 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:24.485 05:35:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.485 05:35:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.485 05:35:27 -- common/autotest_common.sh@10 -- # set +x 
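The same permission check is enforced on the target side: with the key still at mode 0666, nvmf_subsystem_add_host logged "Incorrect permissions for PSK file" and the RPC returned -32603 "Internal error", which is why tls.sh@190 tightens the mode to 0600 before the next target instance is started. A minimal sketch of that server-side call, again using the $KEY shorthand:

  KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # rejected with -32603 while the file is 0666; accepted once it is 0600 (see tls.sh@194 below)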
00:20:24.485 05:35:27 -- nvmf/common.sh@469 -- # nvmfpid=1855494 00:20:24.485 05:35:27 -- nvmf/common.sh@470 -- # waitforlisten 1855494 00:20:24.485 05:35:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.485 05:35:27 -- common/autotest_common.sh@829 -- # '[' -z 1855494 ']' 00:20:24.485 05:35:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.485 05:35:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.485 05:35:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.485 05:35:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.485 05:35:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.485 [2024-12-07 05:35:27.577449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.485 [2024-12-07 05:35:27.577509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.485 [2024-12-07 05:35:27.661414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.485 [2024-12-07 05:35:27.712308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.485 [2024-12-07 05:35:27.712403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.485 [2024-12-07 05:35:27.712409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.485 [2024-12-07 05:35:27.712414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.485 [2024-12-07 05:35:27.712430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.429 05:35:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.429 05:35:28 -- common/autotest_common.sh@862 -- # return 0 00:20:25.429 05:35:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.429 05:35:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.429 05:35:28 -- common/autotest_common.sh@10 -- # set +x 00:20:25.429 05:35:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.429 05:35:28 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.429 05:35:28 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.429 05:35:28 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.429 [2024-12-07 05:35:28.542795] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.429 05:35:28 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.689 05:35:28 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.689 [2024-12-07 05:35:28.843532] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.689 [2024-12-07 05:35:28.843730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.689 05:35:28 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.950 malloc0 00:20:25.950 05:35:29 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.950 05:35:29 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.210 05:35:29 -- target/tls.sh@197 -- # bdevperf_pid=1855862 00:20:26.210 05:35:29 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.210 05:35:29 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.210 05:35:29 -- target/tls.sh@200 -- # waitforlisten 1855862 /var/tmp/bdevperf.sock 00:20:26.210 05:35:29 -- common/autotest_common.sh@829 -- # '[' -z 1855862 ']' 00:20:26.210 05:35:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.210 05:35:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.210 05:35:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
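At this point the second, successful target setup is complete and a fresh bdevperf instance is waiting on its RPC socket. For orientation, setup_nvmf_tgt (tls.sh@58..@67) plus the initiator-side steps reduce to the sequence below, condensed from the calls logged in this section; $SPDK, $KEY and $RPC are shorthands added here, while the NQNs, addresses, sizes and flags are taken verbatim from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=$SPDK/test/nvmf/target/key_long.txt
  RPC=$SPDK/scripts/rpc.py

  # target side (setup_nvmf_tgt)
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener ("secure_channel": true in the saved config below)
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

  # initiator side (bdevperf), driven over its own RPC socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (the script waits for /var/tmp/bdevperf.sock to appear before issuing the next call)
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests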
00:20:26.210 05:35:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.210 05:35:29 -- common/autotest_common.sh@10 -- # set +x 00:20:26.210 [2024-12-07 05:35:29.336256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:26.210 [2024-12-07 05:35:29.336308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855862 ] 00:20:26.210 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.210 [2024-12-07 05:35:29.387764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.210 [2024-12-07 05:35:29.437851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.152 05:35:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.152 05:35:30 -- common/autotest_common.sh@862 -- # return 0 00:20:27.152 05:35:30 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.152 [2024-12-07 05:35:30.263154] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.152 TLSTESTn1 00:20:27.152 05:35:30 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:27.412 05:35:30 -- target/tls.sh@205 -- # tgtconf='{ 00:20:27.412 "subsystems": [ 00:20:27.412 { 00:20:27.412 "subsystem": "iobuf", 00:20:27.412 "config": [ 00:20:27.412 { 00:20:27.412 "method": "iobuf_set_options", 00:20:27.412 "params": { 00:20:27.412 "small_pool_count": 8192, 00:20:27.412 "large_pool_count": 1024, 00:20:27.412 "small_bufsize": 8192, 00:20:27.412 "large_bufsize": 135168 00:20:27.412 } 00:20:27.412 } 00:20:27.412 ] 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "subsystem": "sock", 00:20:27.412 "config": [ 00:20:27.412 { 00:20:27.412 "method": "sock_impl_set_options", 00:20:27.412 "params": { 00:20:27.412 "impl_name": "posix", 00:20:27.412 "recv_buf_size": 2097152, 00:20:27.412 "send_buf_size": 2097152, 00:20:27.412 "enable_recv_pipe": true, 00:20:27.412 "enable_quickack": false, 00:20:27.412 "enable_placement_id": 0, 00:20:27.412 "enable_zerocopy_send_server": true, 00:20:27.412 "enable_zerocopy_send_client": false, 00:20:27.412 "zerocopy_threshold": 0, 00:20:27.412 "tls_version": 0, 00:20:27.412 "enable_ktls": false 00:20:27.412 } 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "method": "sock_impl_set_options", 00:20:27.412 "params": { 00:20:27.412 "impl_name": "ssl", 00:20:27.412 "recv_buf_size": 4096, 00:20:27.412 "send_buf_size": 4096, 00:20:27.412 "enable_recv_pipe": true, 00:20:27.412 "enable_quickack": false, 00:20:27.412 "enable_placement_id": 0, 00:20:27.412 "enable_zerocopy_send_server": true, 00:20:27.412 "enable_zerocopy_send_client": false, 00:20:27.412 "zerocopy_threshold": 0, 00:20:27.412 "tls_version": 0, 00:20:27.412 "enable_ktls": false 00:20:27.412 } 00:20:27.412 } 00:20:27.412 ] 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "subsystem": "vmd", 00:20:27.412 "config": [] 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "subsystem": "accel", 00:20:27.412 "config": [ 00:20:27.412 { 00:20:27.412 "method": "accel_set_options", 00:20:27.412 "params": { 00:20:27.412 "small_cache_size": 128, 
00:20:27.412 "large_cache_size": 16, 00:20:27.412 "task_count": 2048, 00:20:27.412 "sequence_count": 2048, 00:20:27.412 "buf_count": 2048 00:20:27.412 } 00:20:27.412 } 00:20:27.412 ] 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "subsystem": "bdev", 00:20:27.412 "config": [ 00:20:27.412 { 00:20:27.412 "method": "bdev_set_options", 00:20:27.412 "params": { 00:20:27.412 "bdev_io_pool_size": 65535, 00:20:27.412 "bdev_io_cache_size": 256, 00:20:27.412 "bdev_auto_examine": true, 00:20:27.412 "iobuf_small_cache_size": 128, 00:20:27.412 "iobuf_large_cache_size": 16 00:20:27.412 } 00:20:27.412 }, 00:20:27.412 { 00:20:27.412 "method": "bdev_raid_set_options", 00:20:27.412 "params": { 00:20:27.413 "process_window_size_kb": 1024 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "bdev_iscsi_set_options", 00:20:27.413 "params": { 00:20:27.413 "timeout_sec": 30 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "bdev_nvme_set_options", 00:20:27.413 "params": { 00:20:27.413 "action_on_timeout": "none", 00:20:27.413 "timeout_us": 0, 00:20:27.413 "timeout_admin_us": 0, 00:20:27.413 "keep_alive_timeout_ms": 10000, 00:20:27.413 "transport_retry_count": 4, 00:20:27.413 "arbitration_burst": 0, 00:20:27.413 "low_priority_weight": 0, 00:20:27.413 "medium_priority_weight": 0, 00:20:27.413 "high_priority_weight": 0, 00:20:27.413 "nvme_adminq_poll_period_us": 10000, 00:20:27.413 "nvme_ioq_poll_period_us": 0, 00:20:27.413 "io_queue_requests": 0, 00:20:27.413 "delay_cmd_submit": true, 00:20:27.413 "bdev_retry_count": 3, 00:20:27.413 "transport_ack_timeout": 0, 00:20:27.413 "ctrlr_loss_timeout_sec": 0, 00:20:27.413 "reconnect_delay_sec": 0, 00:20:27.413 "fast_io_fail_timeout_sec": 0, 00:20:27.413 "generate_uuids": false, 00:20:27.413 "transport_tos": 0, 00:20:27.413 "io_path_stat": false, 00:20:27.413 "allow_accel_sequence": false 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "bdev_nvme_set_hotplug", 00:20:27.413 "params": { 00:20:27.413 "period_us": 100000, 00:20:27.413 "enable": false 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "bdev_malloc_create", 00:20:27.413 "params": { 00:20:27.413 "name": "malloc0", 00:20:27.413 "num_blocks": 8192, 00:20:27.413 "block_size": 4096, 00:20:27.413 "physical_block_size": 4096, 00:20:27.413 "uuid": "87a1ab59-8919-40e4-83f3-8f05daf214f2", 00:20:27.413 "optimal_io_boundary": 0 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "bdev_wait_for_examine" 00:20:27.413 } 00:20:27.413 ] 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "subsystem": "nbd", 00:20:27.413 "config": [] 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "subsystem": "scheduler", 00:20:27.413 "config": [ 00:20:27.413 { 00:20:27.413 "method": "framework_set_scheduler", 00:20:27.413 "params": { 00:20:27.413 "name": "static" 00:20:27.413 } 00:20:27.413 } 00:20:27.413 ] 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "subsystem": "nvmf", 00:20:27.413 "config": [ 00:20:27.413 { 00:20:27.413 "method": "nvmf_set_config", 00:20:27.413 "params": { 00:20:27.413 "discovery_filter": "match_any", 00:20:27.413 "admin_cmd_passthru": { 00:20:27.413 "identify_ctrlr": false 00:20:27.413 } 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_set_max_subsystems", 00:20:27.413 "params": { 00:20:27.413 "max_subsystems": 1024 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_set_crdt", 00:20:27.413 "params": { 00:20:27.413 "crdt1": 0, 00:20:27.413 "crdt2": 0, 00:20:27.413 "crdt3": 0 00:20:27.413 } 
00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_create_transport", 00:20:27.413 "params": { 00:20:27.413 "trtype": "TCP", 00:20:27.413 "max_queue_depth": 128, 00:20:27.413 "max_io_qpairs_per_ctrlr": 127, 00:20:27.413 "in_capsule_data_size": 4096, 00:20:27.413 "max_io_size": 131072, 00:20:27.413 "io_unit_size": 131072, 00:20:27.413 "max_aq_depth": 128, 00:20:27.413 "num_shared_buffers": 511, 00:20:27.413 "buf_cache_size": 4294967295, 00:20:27.413 "dif_insert_or_strip": false, 00:20:27.413 "zcopy": false, 00:20:27.413 "c2h_success": false, 00:20:27.413 "sock_priority": 0, 00:20:27.413 "abort_timeout_sec": 1 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_create_subsystem", 00:20:27.413 "params": { 00:20:27.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.413 "allow_any_host": false, 00:20:27.413 "serial_number": "SPDK00000000000001", 00:20:27.413 "model_number": "SPDK bdev Controller", 00:20:27.413 "max_namespaces": 10, 00:20:27.413 "min_cntlid": 1, 00:20:27.413 "max_cntlid": 65519, 00:20:27.413 "ana_reporting": false 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_subsystem_add_host", 00:20:27.413 "params": { 00:20:27.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.413 "host": "nqn.2016-06.io.spdk:host1", 00:20:27.413 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_subsystem_add_ns", 00:20:27.413 "params": { 00:20:27.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.413 "namespace": { 00:20:27.413 "nsid": 1, 00:20:27.413 "bdev_name": "malloc0", 00:20:27.413 "nguid": "87A1AB59891940E483F38F05DAF214F2", 00:20:27.413 "uuid": "87a1ab59-8919-40e4-83f3-8f05daf214f2" 00:20:27.413 } 00:20:27.413 } 00:20:27.413 }, 00:20:27.413 { 00:20:27.413 "method": "nvmf_subsystem_add_listener", 00:20:27.413 "params": { 00:20:27.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.413 "listen_address": { 00:20:27.413 "trtype": "TCP", 00:20:27.413 "adrfam": "IPv4", 00:20:27.413 "traddr": "10.0.0.2", 00:20:27.413 "trsvcid": "4420" 00:20:27.413 }, 00:20:27.413 "secure_channel": true 00:20:27.413 } 00:20:27.413 } 00:20:27.413 ] 00:20:27.413 } 00:20:27.413 ] 00:20:27.413 }' 00:20:27.413 05:35:30 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:27.674 05:35:30 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:27.674 "subsystems": [ 00:20:27.674 { 00:20:27.674 "subsystem": "iobuf", 00:20:27.674 "config": [ 00:20:27.674 { 00:20:27.674 "method": "iobuf_set_options", 00:20:27.674 "params": { 00:20:27.674 "small_pool_count": 8192, 00:20:27.674 "large_pool_count": 1024, 00:20:27.674 "small_bufsize": 8192, 00:20:27.674 "large_bufsize": 135168 00:20:27.674 } 00:20:27.674 } 00:20:27.674 ] 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "subsystem": "sock", 00:20:27.674 "config": [ 00:20:27.674 { 00:20:27.674 "method": "sock_impl_set_options", 00:20:27.674 "params": { 00:20:27.674 "impl_name": "posix", 00:20:27.674 "recv_buf_size": 2097152, 00:20:27.674 "send_buf_size": 2097152, 00:20:27.674 "enable_recv_pipe": true, 00:20:27.674 "enable_quickack": false, 00:20:27.674 "enable_placement_id": 0, 00:20:27.674 "enable_zerocopy_send_server": true, 00:20:27.674 "enable_zerocopy_send_client": false, 00:20:27.674 "zerocopy_threshold": 0, 00:20:27.674 "tls_version": 0, 00:20:27.674 "enable_ktls": false 00:20:27.674 } 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "method": 
"sock_impl_set_options", 00:20:27.674 "params": { 00:20:27.674 "impl_name": "ssl", 00:20:27.674 "recv_buf_size": 4096, 00:20:27.674 "send_buf_size": 4096, 00:20:27.674 "enable_recv_pipe": true, 00:20:27.674 "enable_quickack": false, 00:20:27.674 "enable_placement_id": 0, 00:20:27.674 "enable_zerocopy_send_server": true, 00:20:27.674 "enable_zerocopy_send_client": false, 00:20:27.674 "zerocopy_threshold": 0, 00:20:27.674 "tls_version": 0, 00:20:27.674 "enable_ktls": false 00:20:27.674 } 00:20:27.674 } 00:20:27.674 ] 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "subsystem": "vmd", 00:20:27.674 "config": [] 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "subsystem": "accel", 00:20:27.674 "config": [ 00:20:27.674 { 00:20:27.674 "method": "accel_set_options", 00:20:27.674 "params": { 00:20:27.674 "small_cache_size": 128, 00:20:27.674 "large_cache_size": 16, 00:20:27.674 "task_count": 2048, 00:20:27.674 "sequence_count": 2048, 00:20:27.674 "buf_count": 2048 00:20:27.674 } 00:20:27.674 } 00:20:27.674 ] 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "subsystem": "bdev", 00:20:27.674 "config": [ 00:20:27.674 { 00:20:27.674 "method": "bdev_set_options", 00:20:27.674 "params": { 00:20:27.674 "bdev_io_pool_size": 65535, 00:20:27.674 "bdev_io_cache_size": 256, 00:20:27.674 "bdev_auto_examine": true, 00:20:27.674 "iobuf_small_cache_size": 128, 00:20:27.674 "iobuf_large_cache_size": 16 00:20:27.674 } 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "method": "bdev_raid_set_options", 00:20:27.674 "params": { 00:20:27.674 "process_window_size_kb": 1024 00:20:27.674 } 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "method": "bdev_iscsi_set_options", 00:20:27.674 "params": { 00:20:27.674 "timeout_sec": 30 00:20:27.674 } 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "method": "bdev_nvme_set_options", 00:20:27.674 "params": { 00:20:27.674 "action_on_timeout": "none", 00:20:27.674 "timeout_us": 0, 00:20:27.674 "timeout_admin_us": 0, 00:20:27.674 "keep_alive_timeout_ms": 10000, 00:20:27.674 "transport_retry_count": 4, 00:20:27.674 "arbitration_burst": 0, 00:20:27.674 "low_priority_weight": 0, 00:20:27.674 "medium_priority_weight": 0, 00:20:27.674 "high_priority_weight": 0, 00:20:27.674 "nvme_adminq_poll_period_us": 10000, 00:20:27.674 "nvme_ioq_poll_period_us": 0, 00:20:27.674 "io_queue_requests": 512, 00:20:27.674 "delay_cmd_submit": true, 00:20:27.674 "bdev_retry_count": 3, 00:20:27.674 "transport_ack_timeout": 0, 00:20:27.674 "ctrlr_loss_timeout_sec": 0, 00:20:27.674 "reconnect_delay_sec": 0, 00:20:27.674 "fast_io_fail_timeout_sec": 0, 00:20:27.674 "generate_uuids": false, 00:20:27.674 "transport_tos": 0, 00:20:27.674 "io_path_stat": false, 00:20:27.674 "allow_accel_sequence": false 00:20:27.674 } 00:20:27.674 }, 00:20:27.674 { 00:20:27.674 "method": "bdev_nvme_attach_controller", 00:20:27.674 "params": { 00:20:27.674 "name": "TLSTEST", 00:20:27.674 "trtype": "TCP", 00:20:27.674 "adrfam": "IPv4", 00:20:27.674 "traddr": "10.0.0.2", 00:20:27.675 "trsvcid": "4420", 00:20:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.675 "prchk_reftag": false, 00:20:27.675 "prchk_guard": false, 00:20:27.675 "ctrlr_loss_timeout_sec": 0, 00:20:27.675 "reconnect_delay_sec": 0, 00:20:27.675 "fast_io_fail_timeout_sec": 0, 00:20:27.675 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:27.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.675 "hdgst": false, 00:20:27.675 "ddgst": false 00:20:27.675 } 00:20:27.675 }, 00:20:27.675 { 00:20:27.675 "method": "bdev_nvme_set_hotplug", 00:20:27.675 
"params": { 00:20:27.675 "period_us": 100000, 00:20:27.675 "enable": false 00:20:27.675 } 00:20:27.675 }, 00:20:27.675 { 00:20:27.675 "method": "bdev_wait_for_examine" 00:20:27.675 } 00:20:27.675 ] 00:20:27.675 }, 00:20:27.675 { 00:20:27.675 "subsystem": "nbd", 00:20:27.675 "config": [] 00:20:27.675 } 00:20:27.675 ] 00:20:27.675 }' 00:20:27.675 05:35:30 -- target/tls.sh@208 -- # killprocess 1855862 00:20:27.675 05:35:30 -- common/autotest_common.sh@936 -- # '[' -z 1855862 ']' 00:20:27.675 05:35:30 -- common/autotest_common.sh@940 -- # kill -0 1855862 00:20:27.675 05:35:30 -- common/autotest_common.sh@941 -- # uname 00:20:27.675 05:35:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.675 05:35:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1855862 00:20:27.935 05:35:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:27.935 05:35:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:27.935 05:35:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1855862' 00:20:27.935 killing process with pid 1855862 00:20:27.935 05:35:30 -- common/autotest_common.sh@955 -- # kill 1855862 00:20:27.935 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.935 00:20:27.935 Latency(us) 00:20:27.935 [2024-12-07T04:35:31.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.935 [2024-12-07T04:35:31.175Z] =================================================================================================================== 00:20:27.935 [2024-12-07T04:35:31.175Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.935 05:35:30 -- common/autotest_common.sh@960 -- # wait 1855862 00:20:27.935 05:35:31 -- target/tls.sh@209 -- # killprocess 1855494 00:20:27.935 05:35:31 -- common/autotest_common.sh@936 -- # '[' -z 1855494 ']' 00:20:27.935 05:35:31 -- common/autotest_common.sh@940 -- # kill -0 1855494 00:20:27.935 05:35:31 -- common/autotest_common.sh@941 -- # uname 00:20:27.935 05:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.935 05:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1855494 00:20:27.935 05:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:27.935 05:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:27.935 05:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1855494' 00:20:27.935 killing process with pid 1855494 00:20:27.935 05:35:31 -- common/autotest_common.sh@955 -- # kill 1855494 00:20:27.935 05:35:31 -- common/autotest_common.sh@960 -- # wait 1855494 00:20:28.197 05:35:31 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:28.197 05:35:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:28.197 05:35:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.197 05:35:31 -- common/autotest_common.sh@10 -- # set +x 00:20:28.197 05:35:31 -- target/tls.sh@212 -- # echo '{ 00:20:28.197 "subsystems": [ 00:20:28.197 { 00:20:28.197 "subsystem": "iobuf", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": "iobuf_set_options", 00:20:28.197 "params": { 00:20:28.197 "small_pool_count": 8192, 00:20:28.197 "large_pool_count": 1024, 00:20:28.197 "small_bufsize": 8192, 00:20:28.197 "large_bufsize": 135168 00:20:28.197 } 00:20:28.197 } 00:20:28.197 ] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "sock", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": 
"sock_impl_set_options", 00:20:28.197 "params": { 00:20:28.197 "impl_name": "posix", 00:20:28.197 "recv_buf_size": 2097152, 00:20:28.197 "send_buf_size": 2097152, 00:20:28.197 "enable_recv_pipe": true, 00:20:28.197 "enable_quickack": false, 00:20:28.197 "enable_placement_id": 0, 00:20:28.197 "enable_zerocopy_send_server": true, 00:20:28.197 "enable_zerocopy_send_client": false, 00:20:28.197 "zerocopy_threshold": 0, 00:20:28.197 "tls_version": 0, 00:20:28.197 "enable_ktls": false 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "sock_impl_set_options", 00:20:28.197 "params": { 00:20:28.197 "impl_name": "ssl", 00:20:28.197 "recv_buf_size": 4096, 00:20:28.197 "send_buf_size": 4096, 00:20:28.197 "enable_recv_pipe": true, 00:20:28.197 "enable_quickack": false, 00:20:28.197 "enable_placement_id": 0, 00:20:28.197 "enable_zerocopy_send_server": true, 00:20:28.197 "enable_zerocopy_send_client": false, 00:20:28.197 "zerocopy_threshold": 0, 00:20:28.197 "tls_version": 0, 00:20:28.197 "enable_ktls": false 00:20:28.197 } 00:20:28.197 } 00:20:28.197 ] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "vmd", 00:20:28.197 "config": [] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "accel", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": "accel_set_options", 00:20:28.197 "params": { 00:20:28.197 "small_cache_size": 128, 00:20:28.197 "large_cache_size": 16, 00:20:28.197 "task_count": 2048, 00:20:28.197 "sequence_count": 2048, 00:20:28.197 "buf_count": 2048 00:20:28.197 } 00:20:28.197 } 00:20:28.197 ] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "bdev", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": "bdev_set_options", 00:20:28.197 "params": { 00:20:28.197 "bdev_io_pool_size": 65535, 00:20:28.197 "bdev_io_cache_size": 256, 00:20:28.197 "bdev_auto_examine": true, 00:20:28.197 "iobuf_small_cache_size": 128, 00:20:28.197 "iobuf_large_cache_size": 16 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_raid_set_options", 00:20:28.197 "params": { 00:20:28.197 "process_window_size_kb": 1024 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_iscsi_set_options", 00:20:28.197 "params": { 00:20:28.197 "timeout_sec": 30 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_nvme_set_options", 00:20:28.197 "params": { 00:20:28.197 "action_on_timeout": "none", 00:20:28.197 "timeout_us": 0, 00:20:28.197 "timeout_admin_us": 0, 00:20:28.197 "keep_alive_timeout_ms": 10000, 00:20:28.197 "transport_retry_count": 4, 00:20:28.197 "arbitration_burst": 0, 00:20:28.197 "low_priority_weight": 0, 00:20:28.197 "medium_priority_weight": 0, 00:20:28.197 "high_priority_weight": 0, 00:20:28.197 "nvme_adminq_poll_period_us": 10000, 00:20:28.197 "nvme_ioq_poll_period_us": 0, 00:20:28.197 "io_queue_requests": 0, 00:20:28.197 "delay_cmd_submit": true, 00:20:28.197 "bdev_retry_count": 3, 00:20:28.197 "transport_ack_timeout": 0, 00:20:28.197 "ctrlr_loss_timeout_sec": 0, 00:20:28.197 "reconnect_delay_sec": 0, 00:20:28.197 "fast_io_fail_timeout_sec": 0, 00:20:28.197 "generate_uuids": false, 00:20:28.197 "transport_tos": 0, 00:20:28.197 "io_path_stat": false, 00:20:28.197 "allow_accel_sequence": false 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_nvme_set_hotplug", 00:20:28.197 "params": { 00:20:28.197 "period_us": 100000, 00:20:28.197 "enable": false 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_malloc_create", 00:20:28.197 "params": { 
00:20:28.197 "name": "malloc0", 00:20:28.197 "num_blocks": 8192, 00:20:28.197 "block_size": 4096, 00:20:28.197 "physical_block_size": 4096, 00:20:28.197 "uuid": "87a1ab59-8919-40e4-83f3-8f05daf214f2", 00:20:28.197 "optimal_io_boundary": 0 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "bdev_wait_for_examine" 00:20:28.197 } 00:20:28.197 ] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "nbd", 00:20:28.197 "config": [] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "scheduler", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": "framework_set_scheduler", 00:20:28.197 "params": { 00:20:28.197 "name": "static" 00:20:28.197 } 00:20:28.197 } 00:20:28.197 ] 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "subsystem": "nvmf", 00:20:28.197 "config": [ 00:20:28.197 { 00:20:28.197 "method": "nvmf_set_config", 00:20:28.197 "params": { 00:20:28.197 "discovery_filter": "match_any", 00:20:28.197 "admin_cmd_passthru": { 00:20:28.197 "identify_ctrlr": false 00:20:28.197 } 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "nvmf_set_max_subsystems", 00:20:28.197 "params": { 00:20:28.197 "max_subsystems": 1024 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "nvmf_set_crdt", 00:20:28.197 "params": { 00:20:28.197 "crdt1": 0, 00:20:28.197 "crdt2": 0, 00:20:28.197 "crdt3": 0 00:20:28.197 } 00:20:28.197 }, 00:20:28.197 { 00:20:28.197 "method": "nvmf_create_transport", 00:20:28.197 "params": { 00:20:28.197 "trtype": "TCP", 00:20:28.197 "max_queue_depth": 128, 00:20:28.197 "max_io_qpairs_per_ctrlr": 127, 00:20:28.197 "in_capsule_data_size": 4096, 00:20:28.197 "max_io_size": 131072, 00:20:28.197 "io_unit_size": 131072, 00:20:28.197 "max_aq_depth": 128, 00:20:28.197 "num_shared_buffers": 511, 00:20:28.197 "buf_cache_size": 4294967295, 00:20:28.197 "dif_insert_or_strip": false, 00:20:28.197 "zcopy": false, 00:20:28.197 "c2h_success": false, 00:20:28.197 "sock_priority": 0, 00:20:28.198 "abort_timeout_sec": 1 00:20:28.198 } 00:20:28.198 }, 00:20:28.198 { 00:20:28.198 "method": "nvmf_create_subsystem", 00:20:28.198 "params": { 00:20:28.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.198 "allow_any_host": false, 00:20:28.198 "serial_number": "SPDK00000000000001", 00:20:28.198 "model_number": "SPDK bdev Controller", 00:20:28.198 "max_namespaces": 10, 00:20:28.198 "min_cntlid": 1, 00:20:28.198 "max_cntlid": 65519, 00:20:28.198 "ana_reporting": false 00:20:28.198 } 00:20:28.198 }, 00:20:28.198 { 00:20:28.198 "method": "nvmf_subsystem_add_host", 00:20:28.198 "params": { 00:20:28.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.198 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.198 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:28.198 } 00:20:28.198 }, 00:20:28.198 { 00:20:28.198 "method": "nvmf_subsystem_add_ns", 00:20:28.198 "params": { 00:20:28.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.198 "namespace": { 00:20:28.198 "nsid": 1, 00:20:28.198 "bdev_name": "malloc0", 00:20:28.198 "nguid": "87A1AB59891940E483F38F05DAF214F2", 00:20:28.198 "uuid": "87a1ab59-8919-40e4-83f3-8f05daf214f2" 00:20:28.198 } 00:20:28.198 } 00:20:28.198 }, 00:20:28.198 { 00:20:28.198 "method": "nvmf_subsystem_add_listener", 00:20:28.198 "params": { 00:20:28.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.198 "listen_address": { 00:20:28.198 "trtype": "TCP", 00:20:28.198 "adrfam": "IPv4", 00:20:28.198 "traddr": "10.0.0.2", 00:20:28.198 "trsvcid": "4420" 00:20:28.198 }, 00:20:28.198 "secure_channel": true 
00:20:28.198 } 00:20:28.198 } 00:20:28.198 ] 00:20:28.198 } 00:20:28.198 ] 00:20:28.198 }' 00:20:28.198 05:35:31 -- nvmf/common.sh@469 -- # nvmfpid=1856220 00:20:28.198 05:35:31 -- nvmf/common.sh@470 -- # waitforlisten 1856220 00:20:28.198 05:35:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:28.198 05:35:31 -- common/autotest_common.sh@829 -- # '[' -z 1856220 ']' 00:20:28.198 05:35:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.198 05:35:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.198 05:35:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.198 05:35:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.198 05:35:31 -- common/autotest_common.sh@10 -- # set +x 00:20:28.198 [2024-12-07 05:35:31.270770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:28.198 [2024-12-07 05:35:31.270822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.198 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.198 [2024-12-07 05:35:31.355079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.198 [2024-12-07 05:35:31.410346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:28.198 [2024-12-07 05:35:31.410442] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.198 [2024-12-07 05:35:31.410448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.198 [2024-12-07 05:35:31.410454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.198 [2024-12-07 05:35:31.410475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.459 [2024-12-07 05:35:31.585554] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.459 [2024-12-07 05:35:31.617586] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.459 [2024-12-07 05:35:31.617786] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.031 05:35:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.031 05:35:32 -- common/autotest_common.sh@862 -- # return 0 00:20:29.031 05:35:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:29.031 05:35:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.031 05:35:32 -- common/autotest_common.sh@10 -- # set +x 00:20:29.031 05:35:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.031 05:35:32 -- target/tls.sh@216 -- # bdevperf_pid=1856568 00:20:29.031 05:35:32 -- target/tls.sh@217 -- # waitforlisten 1856568 /var/tmp/bdevperf.sock 00:20:29.031 05:35:32 -- common/autotest_common.sh@829 -- # '[' -z 1856568 ']' 00:20:29.031 05:35:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.031 05:35:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.031 05:35:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.031 05:35:32 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:29.031 05:35:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.031 05:35:32 -- common/autotest_common.sh@10 -- # set +x 00:20:29.031 05:35:32 -- target/tls.sh@213 -- # echo '{ 00:20:29.031 "subsystems": [ 00:20:29.031 { 00:20:29.031 "subsystem": "iobuf", 00:20:29.031 "config": [ 00:20:29.031 { 00:20:29.031 "method": "iobuf_set_options", 00:20:29.031 "params": { 00:20:29.032 "small_pool_count": 8192, 00:20:29.032 "large_pool_count": 1024, 00:20:29.032 "small_bufsize": 8192, 00:20:29.032 "large_bufsize": 135168 00:20:29.032 } 00:20:29.032 } 00:20:29.032 ] 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "subsystem": "sock", 00:20:29.032 "config": [ 00:20:29.032 { 00:20:29.032 "method": "sock_impl_set_options", 00:20:29.032 "params": { 00:20:29.032 "impl_name": "posix", 00:20:29.032 "recv_buf_size": 2097152, 00:20:29.032 "send_buf_size": 2097152, 00:20:29.032 "enable_recv_pipe": true, 00:20:29.032 "enable_quickack": false, 00:20:29.032 "enable_placement_id": 0, 00:20:29.032 "enable_zerocopy_send_server": true, 00:20:29.032 "enable_zerocopy_send_client": false, 00:20:29.032 "zerocopy_threshold": 0, 00:20:29.032 "tls_version": 0, 00:20:29.032 "enable_ktls": false 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "sock_impl_set_options", 00:20:29.032 "params": { 00:20:29.032 "impl_name": "ssl", 00:20:29.032 "recv_buf_size": 4096, 00:20:29.032 "send_buf_size": 4096, 00:20:29.032 "enable_recv_pipe": true, 00:20:29.032 "enable_quickack": false, 00:20:29.032 "enable_placement_id": 0, 00:20:29.032 "enable_zerocopy_send_server": true, 00:20:29.032 "enable_zerocopy_send_client": false, 00:20:29.032 "zerocopy_threshold": 0, 00:20:29.032 "tls_version": 0, 
00:20:29.032 "enable_ktls": false 00:20:29.032 } 00:20:29.032 } 00:20:29.032 ] 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "subsystem": "vmd", 00:20:29.032 "config": [] 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "subsystem": "accel", 00:20:29.032 "config": [ 00:20:29.032 { 00:20:29.032 "method": "accel_set_options", 00:20:29.032 "params": { 00:20:29.032 "small_cache_size": 128, 00:20:29.032 "large_cache_size": 16, 00:20:29.032 "task_count": 2048, 00:20:29.032 "sequence_count": 2048, 00:20:29.032 "buf_count": 2048 00:20:29.032 } 00:20:29.032 } 00:20:29.032 ] 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "subsystem": "bdev", 00:20:29.032 "config": [ 00:20:29.032 { 00:20:29.032 "method": "bdev_set_options", 00:20:29.032 "params": { 00:20:29.032 "bdev_io_pool_size": 65535, 00:20:29.032 "bdev_io_cache_size": 256, 00:20:29.032 "bdev_auto_examine": true, 00:20:29.032 "iobuf_small_cache_size": 128, 00:20:29.032 "iobuf_large_cache_size": 16 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_raid_set_options", 00:20:29.032 "params": { 00:20:29.032 "process_window_size_kb": 1024 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_iscsi_set_options", 00:20:29.032 "params": { 00:20:29.032 "timeout_sec": 30 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_nvme_set_options", 00:20:29.032 "params": { 00:20:29.032 "action_on_timeout": "none", 00:20:29.032 "timeout_us": 0, 00:20:29.032 "timeout_admin_us": 0, 00:20:29.032 "keep_alive_timeout_ms": 10000, 00:20:29.032 "transport_retry_count": 4, 00:20:29.032 "arbitration_burst": 0, 00:20:29.032 "low_priority_weight": 0, 00:20:29.032 "medium_priority_weight": 0, 00:20:29.032 "high_priority_weight": 0, 00:20:29.032 "nvme_adminq_poll_period_us": 10000, 00:20:29.032 "nvme_ioq_poll_period_us": 0, 00:20:29.032 "io_queue_requests": 512, 00:20:29.032 "delay_cmd_submit": true, 00:20:29.032 "bdev_retry_count": 3, 00:20:29.032 "transport_ack_timeout": 0, 00:20:29.032 "ctrlr_loss_timeout_sec": 0, 00:20:29.032 "reconnect_delay_sec": 0, 00:20:29.032 "fast_io_fail_timeout_sec": 0, 00:20:29.032 "generate_uuids": false, 00:20:29.032 "transport_tos": 0, 00:20:29.032 "io_path_stat": false, 00:20:29.032 "allow_accel_sequence": false 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_nvme_attach_controller", 00:20:29.032 "params": { 00:20:29.032 "name": "TLSTEST", 00:20:29.032 "trtype": "TCP", 00:20:29.032 "adrfam": "IPv4", 00:20:29.032 "traddr": "10.0.0.2", 00:20:29.032 "trsvcid": "4420", 00:20:29.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.032 "prchk_reftag": false, 00:20:29.032 "prchk_guard": false, 00:20:29.032 "ctrlr_loss_timeout_sec": 0, 00:20:29.032 "reconnect_delay_sec": 0, 00:20:29.032 "fast_io_fail_timeout_sec": 0, 00:20:29.032 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:29.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.032 "hdgst": false, 00:20:29.032 "ddgst": false 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_nvme_set_hotplug", 00:20:29.032 "params": { 00:20:29.032 "period_us": 100000, 00:20:29.032 "enable": false 00:20:29.032 } 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "method": "bdev_wait_for_examine" 00:20:29.032 } 00:20:29.032 ] 00:20:29.032 }, 00:20:29.032 { 00:20:29.032 "subsystem": "nbd", 00:20:29.032 "config": [] 00:20:29.032 } 00:20:29.032 ] 00:20:29.032 }' 00:20:29.032 [2024-12-07 05:35:32.125432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:20:29.032 [2024-12-07 05:35:32.125485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856568 ] 00:20:29.032 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.032 [2024-12-07 05:35:32.177478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.032 [2024-12-07 05:35:32.228452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.294 [2024-12-07 05:35:32.344150] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.867 05:35:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.867 05:35:32 -- common/autotest_common.sh@862 -- # return 0 00:20:29.867 05:35:32 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.867 Running I/O for 10 seconds... 00:20:39.870 00:20:39.870 Latency(us) 00:20:39.870 [2024-12-07T04:35:43.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.871 [2024-12-07T04:35:43.111Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.871 Verification LBA range: start 0x0 length 0x2000 00:20:39.871 TLSTESTn1 : 10.01 6507.27 25.42 0.00 0.00 19653.30 2853.55 50899.63 00:20:39.871 [2024-12-07T04:35:43.111Z] =================================================================================================================== 00:20:39.871 [2024-12-07T04:35:43.111Z] Total : 6507.27 25.42 0.00 0.00 19653.30 2853.55 50899.63 00:20:39.871 0 00:20:39.871 05:35:43 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.871 05:35:43 -- target/tls.sh@223 -- # killprocess 1856568 00:20:39.871 05:35:43 -- common/autotest_common.sh@936 -- # '[' -z 1856568 ']' 00:20:39.871 05:35:43 -- common/autotest_common.sh@940 -- # kill -0 1856568 00:20:39.871 05:35:43 -- common/autotest_common.sh@941 -- # uname 00:20:39.871 05:35:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.871 05:35:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1856568 00:20:39.871 05:35:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:39.871 05:35:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:39.871 05:35:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1856568' 00:20:39.871 killing process with pid 1856568 00:20:39.871 05:35:43 -- common/autotest_common.sh@955 -- # kill 1856568 00:20:39.871 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.871 00:20:39.871 Latency(us) 00:20:39.871 [2024-12-07T04:35:43.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.871 [2024-12-07T04:35:43.111Z] =================================================================================================================== 00:20:39.871 [2024-12-07T04:35:43.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.871 05:35:43 -- common/autotest_common.sh@960 -- # wait 1856568 00:20:40.135 05:35:43 -- target/tls.sh@224 -- # killprocess 1856220 00:20:40.135 05:35:43 -- common/autotest_common.sh@936 -- # '[' -z 1856220 ']' 00:20:40.135 05:35:43 -- common/autotest_common.sh@940 -- # kill -0 1856220 00:20:40.135 05:35:43 -- common/autotest_common.sh@941 -- # uname 00:20:40.135 05:35:43 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.135 05:35:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1856220 00:20:40.135 05:35:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:40.135 05:35:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:40.135 05:35:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1856220' 00:20:40.135 killing process with pid 1856220 00:20:40.135 05:35:43 -- common/autotest_common.sh@955 -- # kill 1856220 00:20:40.135 05:35:43 -- common/autotest_common.sh@960 -- # wait 1856220 00:20:40.396 05:35:43 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:40.396 05:35:43 -- target/tls.sh@227 -- # cleanup 00:20:40.396 05:35:43 -- target/tls.sh@15 -- # process_shm --id 0 00:20:40.396 05:35:43 -- common/autotest_common.sh@806 -- # type=--id 00:20:40.396 05:35:43 -- common/autotest_common.sh@807 -- # id=0 00:20:40.396 05:35:43 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:40.396 05:35:43 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:40.396 05:35:43 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:40.396 05:35:43 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:40.396 05:35:43 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:40.396 05:35:43 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:40.396 nvmf_trace.0 00:20:40.396 05:35:43 -- common/autotest_common.sh@821 -- # return 0 00:20:40.396 05:35:43 -- target/tls.sh@16 -- # killprocess 1856568 00:20:40.396 05:35:43 -- common/autotest_common.sh@936 -- # '[' -z 1856568 ']' 00:20:40.396 05:35:43 -- common/autotest_common.sh@940 -- # kill -0 1856568 00:20:40.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1856568) - No such process 00:20:40.396 05:35:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1856568 is not found' 00:20:40.396 Process with pid 1856568 is not found 00:20:40.396 05:35:43 -- target/tls.sh@17 -- # nvmftestfini 00:20:40.396 05:35:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:40.396 05:35:43 -- nvmf/common.sh@116 -- # sync 00:20:40.396 05:35:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:40.396 05:35:43 -- nvmf/common.sh@119 -- # set +e 00:20:40.396 05:35:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:40.396 05:35:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:40.396 rmmod nvme_tcp 00:20:40.396 rmmod nvme_fabrics 00:20:40.396 rmmod nvme_keyring 00:20:40.396 05:35:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:40.396 05:35:43 -- nvmf/common.sh@123 -- # set -e 00:20:40.396 05:35:43 -- nvmf/common.sh@124 -- # return 0 00:20:40.396 05:35:43 -- nvmf/common.sh@477 -- # '[' -n 1856220 ']' 00:20:40.396 05:35:43 -- nvmf/common.sh@478 -- # killprocess 1856220 00:20:40.396 05:35:43 -- common/autotest_common.sh@936 -- # '[' -z 1856220 ']' 00:20:40.396 05:35:43 -- common/autotest_common.sh@940 -- # kill -0 1856220 00:20:40.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1856220) - No such process 00:20:40.396 05:35:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1856220 is not found' 00:20:40.396 Process with pid 1856220 is not found 00:20:40.396 05:35:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.396 
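For reference, the initiator side of the TLS run above boils down to the following sketch; the paths, the PSK file location and the trimmed-down JSON are illustrative (the real run passes the full subsystem config recorded in the log), and a target must already be listening on 10.0.0.2:4420 with TLS enabled. The configuration is handed to bdevperf through bash process substitution, which is why the log shows -c /dev/fd/63:

  config='{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
          "traddr": "10.0.0.2", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "psk": "/path/to/key_long.txt"
        }
      } ]
    } ]
  }'
  # -z keeps bdevperf idle until perform_tests arrives over the RPC socket
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$config") &
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests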
05:35:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.396 05:35:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.396 05:35:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.396 05:35:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.396 05:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.396 05:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.396 05:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.941 05:35:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:42.942 05:35:45 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.942 00:20:42.942 real 1m12.469s 00:20:42.942 user 1m48.680s 00:20:42.942 sys 0m24.552s 00:20:42.942 05:35:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:42.942 05:35:45 -- common/autotest_common.sh@10 -- # set +x 00:20:42.942 ************************************ 00:20:42.942 END TEST nvmf_tls 00:20:42.942 ************************************ 00:20:42.942 05:35:45 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.942 05:35:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.942 05:35:45 -- common/autotest_common.sh@10 -- # set +x 00:20:42.942 ************************************ 00:20:42.942 START TEST nvmf_fips 00:20:42.942 ************************************ 00:20:42.942 05:35:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.942 * Looking for test storage... 00:20:42.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:42.942 05:35:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:42.942 05:35:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:42.942 05:35:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:42.942 05:35:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:42.942 05:35:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:42.942 05:35:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:42.942 05:35:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:42.942 05:35:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:42.942 05:35:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.942 05:35:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:42.942 05:35:45 -- scripts/common.sh@337 -- # local 'op=<' 00:20:42.942 05:35:45 -- scripts/common.sh@339 -- # ver1_l=2 00:20:42.942 05:35:45 -- scripts/common.sh@340 -- # ver2_l=1 00:20:42.942 05:35:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:42.942 05:35:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:42.942 05:35:45 -- scripts/common.sh@344 -- # : 1 00:20:42.942 05:35:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:42.942 05:35:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.942 05:35:45 -- scripts/common.sh@364 -- # decimal 1 00:20:42.942 05:35:45 -- scripts/common.sh@352 -- # local d=1 00:20:42.942 05:35:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.942 05:35:45 -- scripts/common.sh@354 -- # echo 1 00:20:42.942 05:35:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:42.942 05:35:45 -- scripts/common.sh@365 -- # decimal 2 00:20:42.942 05:35:45 -- scripts/common.sh@352 -- # local d=2 00:20:42.942 05:35:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.942 05:35:45 -- scripts/common.sh@354 -- # echo 2 00:20:42.942 05:35:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:42.942 05:35:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.942 05:35:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:42.942 05:35:45 -- scripts/common.sh@367 -- # return 0 00:20:42.942 05:35:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.942 --rc genhtml_branch_coverage=1 00:20:42.942 --rc genhtml_function_coverage=1 00:20:42.942 --rc genhtml_legend=1 00:20:42.942 --rc geninfo_all_blocks=1 00:20:42.942 --rc geninfo_unexecuted_blocks=1 00:20:42.942 00:20:42.942 ' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.942 --rc genhtml_branch_coverage=1 00:20:42.942 --rc genhtml_function_coverage=1 00:20:42.942 --rc genhtml_legend=1 00:20:42.942 --rc geninfo_all_blocks=1 00:20:42.942 --rc geninfo_unexecuted_blocks=1 00:20:42.942 00:20:42.942 ' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.942 --rc genhtml_branch_coverage=1 00:20:42.942 --rc genhtml_function_coverage=1 00:20:42.942 --rc genhtml_legend=1 00:20:42.942 --rc geninfo_all_blocks=1 00:20:42.942 --rc geninfo_unexecuted_blocks=1 00:20:42.942 00:20:42.942 ' 00:20:42.942 05:35:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.942 --rc genhtml_branch_coverage=1 00:20:42.942 --rc genhtml_function_coverage=1 00:20:42.942 --rc genhtml_legend=1 00:20:42.942 --rc geninfo_all_blocks=1 00:20:42.942 --rc geninfo_unexecuted_blocks=1 00:20:42.942 00:20:42.942 ' 00:20:42.942 05:35:45 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.942 05:35:45 -- nvmf/common.sh@7 -- # uname -s 00:20:42.942 05:35:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.942 05:35:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.942 05:35:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.942 05:35:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.942 05:35:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.942 05:35:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.942 05:35:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.942 05:35:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.942 05:35:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.942 05:35:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.942 05:35:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.942 05:35:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.942 05:35:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.942 05:35:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.942 05:35:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.942 05:35:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.942 05:35:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.942 05:35:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.942 05:35:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.942 05:35:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.942 05:35:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.942 05:35:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.942 05:35:45 -- paths/export.sh@5 -- # export PATH 00:20:42.942 05:35:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.942 05:35:45 -- nvmf/common.sh@46 -- # : 0 00:20:42.942 05:35:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.942 05:35:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.942 05:35:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.942 05:35:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.942 05:35:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.942 05:35:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.942 05:35:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.942 05:35:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.942 05:35:45 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.942 05:35:45 -- fips/fips.sh@89 -- # check_openssl_version 00:20:42.942 05:35:45 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:42.942 05:35:45 -- fips/fips.sh@85 -- # openssl version 00:20:42.942 05:35:45 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:42.942 05:35:45 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:20:42.942 05:35:45 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:42.942 05:35:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:42.942 05:35:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:42.942 05:35:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:42.942 05:35:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:42.942 05:35:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.942 05:35:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:42.942 05:35:45 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:42.942 05:35:45 -- scripts/common.sh@339 -- # ver1_l=3 00:20:42.942 05:35:45 -- scripts/common.sh@340 -- # ver2_l=3 00:20:42.942 05:35:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:42.942 05:35:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:42.942 05:35:45 -- scripts/common.sh@347 -- # : 1 00:20:42.942 05:35:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:42.942 05:35:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.942 05:35:45 -- scripts/common.sh@364 -- # decimal 3 00:20:42.942 05:35:45 -- scripts/common.sh@352 -- # local d=3 00:20:42.942 05:35:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.942 05:35:45 -- scripts/common.sh@354 -- # echo 3 00:20:42.943 05:35:45 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:42.943 05:35:45 -- scripts/common.sh@365 -- # decimal 3 00:20:42.943 05:35:45 -- scripts/common.sh@352 -- # local d=3 00:20:42.943 05:35:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.943 05:35:45 -- scripts/common.sh@354 -- # echo 3 00:20:42.943 05:35:45 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:42.943 05:35:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.943 05:35:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:42.943 05:35:45 -- scripts/common.sh@363 -- # (( v++ )) 00:20:42.943 05:35:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.943 05:35:45 -- scripts/common.sh@364 -- # decimal 1 00:20:42.943 05:35:45 -- scripts/common.sh@352 -- # local d=1 00:20:42.943 05:35:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.943 05:35:45 -- scripts/common.sh@354 -- # echo 1 00:20:42.943 05:35:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:42.943 05:35:45 -- scripts/common.sh@365 -- # decimal 0 00:20:42.943 05:35:45 -- scripts/common.sh@352 -- # local d=0 00:20:42.943 05:35:45 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.943 05:35:45 -- scripts/common.sh@354 -- # echo 0 00:20:42.943 05:35:45 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:42.943 05:35:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.943 05:35:45 -- scripts/common.sh@366 -- # return 0 00:20:42.943 05:35:45 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:42.943 05:35:45 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:42.943 05:35:45 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:42.943 05:35:45 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:42.943 05:35:45 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:42.943 05:35:45 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:42.943 05:35:45 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:42.943 05:35:45 -- fips/fips.sh@113 -- # build_openssl_config 00:20:42.943 05:35:45 -- fips/fips.sh@37 -- # cat 00:20:42.943 05:35:45 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:42.943 05:35:45 -- fips/fips.sh@58 -- # cat - 00:20:42.943 05:35:45 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:42.943 05:35:45 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:42.943 05:35:45 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:42.943 05:35:45 -- fips/fips.sh@116 -- # openssl list -providers 00:20:42.943 05:35:45 -- fips/fips.sh@116 -- # grep name 00:20:42.943 05:35:46 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:42.943 05:35:46 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:42.943 05:35:46 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:42.943 05:35:46 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:42.943 05:35:46 -- common/autotest_common.sh@650 -- # local es=0 00:20:42.943 05:35:46 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:42.943 05:35:46 -- fips/fips.sh@127 -- # : 00:20:42.943 05:35:46 -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:42.943 05:35:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.943 05:35:46 -- common/autotest_common.sh@642 -- # type -t openssl 00:20:42.943 05:35:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.943 05:35:46 -- common/autotest_common.sh@644 -- # type -P openssl 00:20:42.943 05:35:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.943 05:35:46 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:42.943 05:35:46 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:42.943 05:35:46 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:42.943 Error setting digest 00:20:42.943 406242D0A17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:42.943 406242D0A17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:42.943 05:35:46 -- common/autotest_common.sh@653 -- # es=1 00:20:42.943 05:35:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:42.943 05:35:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:42.943 05:35:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:42.943 05:35:46 -- fips/fips.sh@130 -- # nvmftestinit 00:20:42.943 05:35:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:42.943 05:35:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.943 05:35:46 -- nvmf/common.sh@436 -- # prepare_net_devs 
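In other words, fips.sh treats the failing MD5 digest above as proof that the FIPS provider is actually enforcing restrictions. A stand-alone sketch of that probe, assuming an OpenSSL 3 build; spdk_fips.conf stands in for the config that fips.sh generates on the fly, and any FIPS-enabled config (and any input file) would do:

  openssl version                                  # provider model needs >= 3.0.0
  openssl list -providers | grep name              # expect both a base and a fips provider
  if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null >/dev/null 2>&1; then
      echo "MD5 still usable - FIPS restrictions are NOT in effect" >&2
  else
      echo "MD5 rejected - FIPS restrictions are active"
  fi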
00:20:42.943 05:35:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.943 05:35:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.943 05:35:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.943 05:35:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.943 05:35:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.943 05:35:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:42.943 05:35:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:42.943 05:35:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.943 05:35:46 -- common/autotest_common.sh@10 -- # set +x 00:20:51.086 05:35:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:51.086 05:35:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:51.086 05:35:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:51.086 05:35:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:51.086 05:35:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:51.086 05:35:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:51.086 05:35:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:51.086 05:35:53 -- nvmf/common.sh@294 -- # net_devs=() 00:20:51.086 05:35:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:51.086 05:35:53 -- nvmf/common.sh@295 -- # e810=() 00:20:51.086 05:35:53 -- nvmf/common.sh@295 -- # local -ga e810 00:20:51.086 05:35:53 -- nvmf/common.sh@296 -- # x722=() 00:20:51.086 05:35:53 -- nvmf/common.sh@296 -- # local -ga x722 00:20:51.086 05:35:53 -- nvmf/common.sh@297 -- # mlx=() 00:20:51.086 05:35:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:51.086 05:35:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.086 05:35:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:51.086 05:35:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:51.086 05:35:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:51.086 05:35:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:51.086 05:35:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.086 05:35:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:51.086 05:35:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.086 05:35:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:51.086 05:35:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:51.086 05:35:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:51.086 05:35:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.086 05:35:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:51.086 05:35:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.086 05:35:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.086 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.087 05:35:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.087 05:35:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:51.087 05:35:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.087 05:35:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:51.087 05:35:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.087 05:35:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.087 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.087 05:35:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.087 05:35:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:51.087 05:35:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:51.087 05:35:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:51.087 05:35:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:51.087 05:35:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:51.087 05:35:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.087 05:35:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.087 05:35:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.087 05:35:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:51.087 05:35:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.087 05:35:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.087 05:35:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:51.087 05:35:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.087 05:35:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.087 05:35:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:51.087 05:35:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:51.087 05:35:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.087 05:35:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.087 05:35:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.087 05:35:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:20:51.087 05:35:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:51.087 05:35:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.087 05:35:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.087 05:35:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.087 05:35:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:51.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:20:51.087 00:20:51.087 --- 10.0.0.2 ping statistics --- 00:20:51.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.087 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:20:51.087 05:35:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:20:51.087 00:20:51.087 --- 10.0.0.1 ping statistics --- 00:20:51.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.087 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:20:51.087 05:35:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.087 05:35:53 -- nvmf/common.sh@410 -- # return 0 00:20:51.087 05:35:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:51.087 05:35:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.087 05:35:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:51.087 05:35:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:51.087 05:35:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.087 05:35:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:51.087 05:35:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:51.087 05:35:53 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:51.087 05:35:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:51.087 05:35:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.087 05:35:53 -- common/autotest_common.sh@10 -- # set +x 00:20:51.087 05:35:53 -- nvmf/common.sh@469 -- # nvmfpid=1863044 00:20:51.087 05:35:53 -- nvmf/common.sh@470 -- # waitforlisten 1863044 00:20:51.087 05:35:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.087 05:35:53 -- common/autotest_common.sh@829 -- # '[' -z 1863044 ']' 00:20:51.087 05:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.087 05:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.087 05:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.087 05:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.087 05:35:53 -- common/autotest_common.sh@10 -- # set +x 00:20:51.087 [2024-12-07 05:35:53.703773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
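Stepping back: before the nvmf target above was launched, nvmf_tcp_init laid out a two-namespace topology so the initiator (default namespace, cvl_0_1, 10.0.0.1) and the target (cvl_0_0_ns_spdk namespace, cvl_0_0, 10.0.0.2) talk over the two E810 ports found earlier. Condensed from the commands in the trace above:

  ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator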
00:20:51.087 [2024-12-07 05:35:53.703845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.087 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.087 [2024-12-07 05:35:53.793764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.087 [2024-12-07 05:35:53.883843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:51.087 [2024-12-07 05:35:53.883993] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.087 [2024-12-07 05:35:53.884003] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.087 [2024-12-07 05:35:53.884023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.087 [2024-12-07 05:35:53.884048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.348 05:35:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.348 05:35:54 -- common/autotest_common.sh@862 -- # return 0 00:20:51.348 05:35:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:51.348 05:35:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.348 05:35:54 -- common/autotest_common.sh@10 -- # set +x 00:20:51.348 05:35:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.349 05:35:54 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:51.349 05:35:54 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.349 05:35:54 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.349 05:35:54 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.349 05:35:54 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.349 05:35:54 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.349 05:35:54 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.349 05:35:54 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.610 [2024-12-07 05:35:54.687648] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.610 [2024-12-07 05:35:54.703654] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.610 [2024-12-07 05:35:54.703961] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.610 malloc0 00:20:51.610 05:35:54 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.610 05:35:54 -- fips/fips.sh@147 -- # bdevperf_pid=1863375 00:20:51.610 05:35:54 -- fips/fips.sh@148 -- # waitforlisten 1863375 /var/tmp/bdevperf.sock 00:20:51.610 05:35:54 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.610 05:35:54 -- common/autotest_common.sh@829 -- # '[' -z 1863375 ']' 00:20:51.610 05:35:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.610 05:35:54 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.610 05:35:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.610 05:35:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.610 05:35:54 -- common/autotest_common.sh@10 -- # set +x 00:20:51.610 [2024-12-07 05:35:54.828867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:51.610 [2024-12-07 05:35:54.828940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863375 ] 00:20:51.872 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.872 [2024-12-07 05:35:54.885696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.872 [2024-12-07 05:35:54.947787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.441 05:35:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.441 05:35:55 -- common/autotest_common.sh@862 -- # return 0 00:20:52.441 05:35:55 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:52.701 [2024-12-07 05:35:55.751591] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.701 TLSTESTn1 00:20:52.701 05:35:55 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.701 Running I/O for 10 seconds... 
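The FIPS variant differs from the earlier TLS run in that the controller is attached at runtime over the RPC socket rather than through a JSON config. A sketch of just the initiator side, with illustrative file locations; the target must already expose nqn.2016-06.io.spdk:cnode1 with the same PSK configured, and the harness waits for the RPC socket before issuing commands:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt
  chmod 0600 key.txt                                    # PSK files must not be world-readable
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests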
00:21:04.933 00:21:04.933 Latency(us) 00:21:04.933 [2024-12-07T04:36:08.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.933 [2024-12-07T04:36:08.173Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.933 Verification LBA range: start 0x0 length 0x2000 00:21:04.933 TLSTESTn1 : 10.02 6862.92 26.81 0.00 0.00 18631.35 3904.85 51118.08 00:21:04.933 [2024-12-07T04:36:08.173Z] =================================================================================================================== 00:21:04.933 [2024-12-07T04:36:08.173Z] Total : 6862.92 26.81 0.00 0.00 18631.35 3904.85 51118.08 00:21:04.933 0 00:21:04.933 05:36:05 -- fips/fips.sh@1 -- # cleanup 00:21:04.933 05:36:05 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:04.933 05:36:05 -- common/autotest_common.sh@806 -- # type=--id 00:21:04.933 05:36:05 -- common/autotest_common.sh@807 -- # id=0 00:21:04.933 05:36:05 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:04.933 05:36:05 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:04.933 05:36:05 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:04.933 05:36:05 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:04.933 05:36:05 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:04.933 05:36:05 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:04.933 nvmf_trace.0 00:21:04.933 05:36:06 -- common/autotest_common.sh@821 -- # return 0 00:21:04.933 05:36:06 -- fips/fips.sh@16 -- # killprocess 1863375 00:21:04.933 05:36:06 -- common/autotest_common.sh@936 -- # '[' -z 1863375 ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@940 -- # kill -0 1863375 00:21:04.933 05:36:06 -- common/autotest_common.sh@941 -- # uname 00:21:04.933 05:36:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1863375 00:21:04.933 05:36:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:04.933 05:36:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1863375' 00:21:04.933 killing process with pid 1863375 00:21:04.933 05:36:06 -- common/autotest_common.sh@955 -- # kill 1863375 00:21:04.933 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.933 00:21:04.933 Latency(us) 00:21:04.933 [2024-12-07T04:36:08.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.933 [2024-12-07T04:36:08.173Z] =================================================================================================================== 00:21:04.933 [2024-12-07T04:36:08.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.933 05:36:06 -- common/autotest_common.sh@960 -- # wait 1863375 00:21:04.933 05:36:06 -- fips/fips.sh@17 -- # nvmftestfini 00:21:04.933 05:36:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:04.933 05:36:06 -- nvmf/common.sh@116 -- # sync 00:21:04.933 05:36:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:04.933 05:36:06 -- nvmf/common.sh@119 -- # set +e 00:21:04.933 05:36:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:04.933 05:36:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:04.933 rmmod nvme_tcp 00:21:04.933 rmmod nvme_fabrics 00:21:04.933 rmmod nvme_keyring 
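The cleanup block that follows each run, reduced to its essentials: copy the SPDK trace ring out of /dev/shm for offline analysis, stop the apps, then unload the kernel initiator modules. The pid variables and the output directory here are hypothetical stand-ins for what the harness tracks:

  for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do      # e.g. nvmf_trace.0
      tar -C /dev/shm/ -cvzf "output/${f}_shm.tar.gz" "$f"
  done
  kill "$bdevperf_pid" 2>/dev/null || true
  kill "$nvmf_tgt_pid" 2>/dev/null || true
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod output above shows
  modprobe -v -r nvme-fabrics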
00:21:04.933 05:36:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:04.933 05:36:06 -- nvmf/common.sh@123 -- # set -e 00:21:04.933 05:36:06 -- nvmf/common.sh@124 -- # return 0 00:21:04.933 05:36:06 -- nvmf/common.sh@477 -- # '[' -n 1863044 ']' 00:21:04.933 05:36:06 -- nvmf/common.sh@478 -- # killprocess 1863044 00:21:04.933 05:36:06 -- common/autotest_common.sh@936 -- # '[' -z 1863044 ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@940 -- # kill -0 1863044 00:21:04.933 05:36:06 -- common/autotest_common.sh@941 -- # uname 00:21:04.933 05:36:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1863044 00:21:04.933 05:36:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:04.933 05:36:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:04.933 05:36:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1863044' 00:21:04.933 killing process with pid 1863044 00:21:04.933 05:36:06 -- common/autotest_common.sh@955 -- # kill 1863044 00:21:04.933 05:36:06 -- common/autotest_common.sh@960 -- # wait 1863044 00:21:04.933 05:36:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:04.933 05:36:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:04.933 05:36:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:04.933 05:36:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.933 05:36:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:04.933 05:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.933 05:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.933 05:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.506 05:36:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:05.506 05:36:08 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.506 00:21:05.506 real 0m22.894s 00:21:05.506 user 0m23.522s 00:21:05.506 sys 0m9.837s 00:21:05.506 05:36:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:05.506 05:36:08 -- common/autotest_common.sh@10 -- # set +x 00:21:05.506 ************************************ 00:21:05.506 END TEST nvmf_fips 00:21:05.506 ************************************ 00:21:05.506 05:36:08 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:05.506 05:36:08 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.506 05:36:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.506 05:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.506 05:36:08 -- common/autotest_common.sh@10 -- # set +x 00:21:05.506 ************************************ 00:21:05.506 START TEST nvmf_fuzz 00:21:05.506 ************************************ 00:21:05.506 05:36:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.506 * Looking for test storage... 
00:21:05.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.506 05:36:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:05.506 05:36:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:05.506 05:36:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:05.768 05:36:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:05.768 05:36:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:05.768 05:36:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:05.768 05:36:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:05.768 05:36:08 -- scripts/common.sh@335 -- # IFS=.-: 00:21:05.768 05:36:08 -- scripts/common.sh@335 -- # read -ra ver1 00:21:05.768 05:36:08 -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.768 05:36:08 -- scripts/common.sh@336 -- # read -ra ver2 00:21:05.768 05:36:08 -- scripts/common.sh@337 -- # local 'op=<' 00:21:05.768 05:36:08 -- scripts/common.sh@339 -- # ver1_l=2 00:21:05.768 05:36:08 -- scripts/common.sh@340 -- # ver2_l=1 00:21:05.768 05:36:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:05.768 05:36:08 -- scripts/common.sh@343 -- # case "$op" in 00:21:05.768 05:36:08 -- scripts/common.sh@344 -- # : 1 00:21:05.768 05:36:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:05.768 05:36:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.768 05:36:08 -- scripts/common.sh@364 -- # decimal 1 00:21:05.768 05:36:08 -- scripts/common.sh@352 -- # local d=1 00:21:05.768 05:36:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.768 05:36:08 -- scripts/common.sh@354 -- # echo 1 00:21:05.768 05:36:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:05.768 05:36:08 -- scripts/common.sh@365 -- # decimal 2 00:21:05.768 05:36:08 -- scripts/common.sh@352 -- # local d=2 00:21:05.768 05:36:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.768 05:36:08 -- scripts/common.sh@354 -- # echo 2 00:21:05.768 05:36:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:05.768 05:36:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:05.768 05:36:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:05.768 05:36:08 -- scripts/common.sh@367 -- # return 0 00:21:05.768 05:36:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.768 05:36:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.768 --rc genhtml_branch_coverage=1 00:21:05.768 --rc genhtml_function_coverage=1 00:21:05.768 --rc genhtml_legend=1 00:21:05.768 --rc geninfo_all_blocks=1 00:21:05.768 --rc geninfo_unexecuted_blocks=1 00:21:05.768 00:21:05.768 ' 00:21:05.768 05:36:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.768 --rc genhtml_branch_coverage=1 00:21:05.768 --rc genhtml_function_coverage=1 00:21:05.768 --rc genhtml_legend=1 00:21:05.768 --rc geninfo_all_blocks=1 00:21:05.768 --rc geninfo_unexecuted_blocks=1 00:21:05.768 00:21:05.768 ' 00:21:05.768 05:36:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.768 --rc genhtml_branch_coverage=1 00:21:05.768 --rc genhtml_function_coverage=1 00:21:05.768 --rc genhtml_legend=1 00:21:05.768 --rc geninfo_all_blocks=1 00:21:05.768 --rc geninfo_unexecuted_blocks=1 00:21:05.768 00:21:05.768 
' 00:21:05.768 05:36:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:05.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.768 --rc genhtml_branch_coverage=1 00:21:05.768 --rc genhtml_function_coverage=1 00:21:05.768 --rc genhtml_legend=1 00:21:05.768 --rc geninfo_all_blocks=1 00:21:05.768 --rc geninfo_unexecuted_blocks=1 00:21:05.768 00:21:05.768 ' 00:21:05.768 05:36:08 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.768 05:36:08 -- nvmf/common.sh@7 -- # uname -s 00:21:05.768 05:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.768 05:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.768 05:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.768 05:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.768 05:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.768 05:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.768 05:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.768 05:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.768 05:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.768 05:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.768 05:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.768 05:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.768 05:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.768 05:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.768 05:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.768 05:36:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.768 05:36:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.768 05:36:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.768 05:36:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.768 05:36:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.768 05:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.768 05:36:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.768 05:36:08 -- paths/export.sh@5 -- # export PATH 00:21:05.768 05:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.768 05:36:08 -- nvmf/common.sh@46 -- # : 0 00:21:05.768 05:36:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.768 05:36:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.768 05:36:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.768 05:36:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.768 05:36:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.768 05:36:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.768 05:36:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.768 05:36:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.768 05:36:08 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:05.768 05:36:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:05.768 05:36:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.768 05:36:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.768 05:36:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.768 05:36:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.768 05:36:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.768 05:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.768 05:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.768 05:36:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:05.768 05:36:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:05.768 05:36:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:05.768 05:36:08 -- common/autotest_common.sh@10 -- # set +x 00:21:13.914 05:36:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:13.914 05:36:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:13.914 05:36:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:13.914 05:36:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:13.914 05:36:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:13.914 05:36:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:13.914 05:36:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:13.914 05:36:16 -- nvmf/common.sh@294 -- # net_devs=() 00:21:13.914 05:36:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:13.914 05:36:16 -- nvmf/common.sh@295 -- # e810=() 00:21:13.914 05:36:16 -- nvmf/common.sh@295 -- # local -ga e810 00:21:13.914 05:36:16 -- nvmf/common.sh@296 -- # x722=() 
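The version gates seen above (openssl 3.1.1 >= 3.0.0 in the FIPS test, lcov 1.15 < 2 here) all go through the same cmp_versions helper in scripts/common.sh: split both strings on dots and compare field by field. A stripped-down sketch with a hypothetical function name, assuming purely numeric components:

  version_ge() {                       # usage: version_ge 3.1.1 3.0.0
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0                         # all fields equal
  }
  version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is new enough"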
00:21:13.914 05:36:16 -- nvmf/common.sh@296 -- # local -ga x722 00:21:13.914 05:36:16 -- nvmf/common.sh@297 -- # mlx=() 00:21:13.914 05:36:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:13.914 05:36:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.914 05:36:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:13.914 05:36:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:13.914 05:36:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:13.915 05:36:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:13.915 05:36:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:13.915 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:13.915 05:36:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:13.915 05:36:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:13.915 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:13.915 05:36:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:13.915 05:36:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.915 05:36:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.915 05:36:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:13.915 Found net devices under 0000:31:00.0: cvl_0_0 00:21:13.915 05:36:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:13.915 05:36:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:13.915 05:36:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.915 05:36:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.915 05:36:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:13.915 Found net devices under 0000:31:00.1: cvl_0_1 00:21:13.915 05:36:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.915 05:36:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:13.915 05:36:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:13.915 05:36:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.915 05:36:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.915 05:36:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.915 05:36:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:13.915 05:36:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.915 05:36:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.915 05:36:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:13.915 05:36:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.915 05:36:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.915 05:36:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:13.915 05:36:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:13.915 05:36:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.915 05:36:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.915 05:36:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.915 05:36:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.915 05:36:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:13.915 05:36:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.915 05:36:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.915 05:36:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.915 05:36:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:13.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:21:13.915 00:21:13.915 --- 10.0.0.2 ping statistics --- 00:21:13.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.915 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:21:13.915 05:36:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:21:13.915 00:21:13.915 --- 10.0.0.1 ping statistics --- 00:21:13.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.915 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:21:13.915 05:36:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.915 05:36:16 -- nvmf/common.sh@410 -- # return 0 00:21:13.915 05:36:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:13.915 05:36:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.915 05:36:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:13.915 05:36:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.915 05:36:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:13.915 05:36:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:13.915 05:36:16 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1870403 00:21:13.915 05:36:16 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:13.915 05:36:16 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:13.915 05:36:16 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1870403 00:21:13.915 05:36:16 -- common/autotest_common.sh@829 -- # '[' -z 1870403 ']' 00:21:13.915 05:36:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.915 05:36:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.915 05:36:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
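Note: the nvmf_tcp_init trace above builds the self-contained TCP test topology for this run. The two E810 ports matched earlier by PCI ID 8086:0x159b (cvl_0_0 under 0000:31:00.0, cvl_0_1 under 0000:31:00.1) are split across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420, and both directions are verified with a single ping. A condensed sketch of that plumbing, assuming the interface and namespace names from this log (adapt for other NICs):

  # Sketch of the namespace setup traced above (names copied from the log).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns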
00:21:13.915 05:36:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.915 05:36:16 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 05:36:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.176 05:36:17 -- common/autotest_common.sh@862 -- # return 0 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.176 05:36:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 05:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 05:36:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:14.176 05:36:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 05:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 Malloc0 00:21:14.176 05:36:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.176 05:36:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 05:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 05:36:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.176 05:36:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 05:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 05:36:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.176 05:36:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 05:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 05:36:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:14.176 05:36:17 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:46.416 Fuzzing completed. Shutting down the fuzz application 00:21:46.416 00:21:46.416 Dumping successful admin opcodes: 00:21:46.416 8, 9, 10, 24, 00:21:46.416 Dumping successful io opcodes: 00:21:46.416 0, 9, 00:21:46.416 NS: 0x200003aeff00 I/O qp, Total commands completed: 954690, total successful commands: 5582, random_seed: 1385332416 00:21:46.416 NS: 0x200003aeff00 admin qp, Total commands completed: 120110, total successful commands: 984, random_seed: 4063733888 00:21:46.416 05:36:47 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:46.416 Fuzzing completed. 
Shutting down the fuzz application 00:21:46.416 00:21:46.416 Dumping successful admin opcodes: 00:21:46.416 24, 00:21:46.416 Dumping successful io opcodes: 00:21:46.416 00:21:46.416 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 61622953 00:21:46.416 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 61704077 00:21:46.416 05:36:48 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.416 05:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.416 05:36:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.416 05:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.416 05:36:48 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:46.416 05:36:48 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:46.416 05:36:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:46.416 05:36:48 -- nvmf/common.sh@116 -- # sync 00:21:46.416 05:36:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:46.416 05:36:48 -- nvmf/common.sh@119 -- # set +e 00:21:46.416 05:36:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:46.416 05:36:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:46.416 rmmod nvme_tcp 00:21:46.416 rmmod nvme_fabrics 00:21:46.416 rmmod nvme_keyring 00:21:46.416 05:36:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:46.416 05:36:48 -- nvmf/common.sh@123 -- # set -e 00:21:46.416 05:36:48 -- nvmf/common.sh@124 -- # return 0 00:21:46.416 05:36:48 -- nvmf/common.sh@477 -- # '[' -n 1870403 ']' 00:21:46.416 05:36:48 -- nvmf/common.sh@478 -- # killprocess 1870403 00:21:46.416 05:36:48 -- common/autotest_common.sh@936 -- # '[' -z 1870403 ']' 00:21:46.416 05:36:48 -- common/autotest_common.sh@940 -- # kill -0 1870403 00:21:46.416 05:36:48 -- common/autotest_common.sh@941 -- # uname 00:21:46.416 05:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.416 05:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1870403 00:21:46.416 05:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:46.416 05:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:46.416 05:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1870403' 00:21:46.416 killing process with pid 1870403 00:21:46.416 05:36:48 -- common/autotest_common.sh@955 -- # kill 1870403 00:21:46.416 05:36:48 -- common/autotest_common.sh@960 -- # wait 1870403 00:21:46.416 05:36:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:46.416 05:36:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:46.416 05:36:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:46.416 05:36:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.416 05:36:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:46.416 05:36:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.416 05:36:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.416 05:36:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.327 05:36:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:48.327 05:36:51 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:48.327 00:21:48.327 real 0m42.614s 00:21:48.327 user 0m57.090s 00:21:48.327 sys 0m14.729s 
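Note: the fuzz stage that just finished reduces to a small amount of target configuration plus two nvme_fuzz invocations: a 30-second randomized pass seeded with 123456, then a replay of example.json. A rough standalone equivalent is sketched below, assuming scripts/rpc.py from the checked-out SPDK tree is used in place of the harness's rpc_cmd wrapper and that paths are relative to the spdk directory (flags copied from the trace):

  # Start the target inside the test namespace on a single core, as in the trace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # Configure it over the default /var/tmp/spdk.sock RPC socket.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Randomized 30 s run, then a JSON-driven replay against the same subsystem.
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j ./test/app/fuzz/nvme_fuzz/example.json -a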
00:21:48.327 05:36:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:48.327 05:36:51 -- common/autotest_common.sh@10 -- # set +x 00:21:48.327 ************************************ 00:21:48.327 END TEST nvmf_fuzz 00:21:48.327 ************************************ 00:21:48.327 05:36:51 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:48.327 05:36:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:48.327 05:36:51 -- common/autotest_common.sh@10 -- # set +x 00:21:48.327 ************************************ 00:21:48.327 START TEST nvmf_multiconnection 00:21:48.327 ************************************ 00:21:48.327 05:36:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:48.327 * Looking for test storage... 00:21:48.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.327 05:36:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:48.327 05:36:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:48.327 05:36:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:48.327 05:36:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:48.327 05:36:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:48.327 05:36:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:48.327 05:36:51 -- scripts/common.sh@335 -- # IFS=.-: 00:21:48.327 05:36:51 -- scripts/common.sh@335 -- # read -ra ver1 00:21:48.327 05:36:51 -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.327 05:36:51 -- scripts/common.sh@336 -- # read -ra ver2 00:21:48.327 05:36:51 -- scripts/common.sh@337 -- # local 'op=<' 00:21:48.327 05:36:51 -- scripts/common.sh@339 -- # ver1_l=2 00:21:48.327 05:36:51 -- scripts/common.sh@340 -- # ver2_l=1 00:21:48.327 05:36:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:48.327 05:36:51 -- scripts/common.sh@343 -- # case "$op" in 00:21:48.327 05:36:51 -- scripts/common.sh@344 -- # : 1 00:21:48.327 05:36:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:48.327 05:36:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.327 05:36:51 -- scripts/common.sh@364 -- # decimal 1 00:21:48.327 05:36:51 -- scripts/common.sh@352 -- # local d=1 00:21:48.327 05:36:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.327 05:36:51 -- scripts/common.sh@354 -- # echo 1 00:21:48.327 05:36:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:48.327 05:36:51 -- scripts/common.sh@365 -- # decimal 2 00:21:48.327 05:36:51 -- scripts/common.sh@352 -- # local d=2 00:21:48.327 05:36:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.327 05:36:51 -- scripts/common.sh@354 -- # echo 2 00:21:48.327 05:36:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:48.327 05:36:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:48.327 05:36:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:48.327 05:36:51 -- scripts/common.sh@367 -- # return 0 00:21:48.327 05:36:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.327 --rc genhtml_branch_coverage=1 00:21:48.327 --rc genhtml_function_coverage=1 00:21:48.327 --rc genhtml_legend=1 00:21:48.327 --rc geninfo_all_blocks=1 00:21:48.327 --rc geninfo_unexecuted_blocks=1 00:21:48.327 00:21:48.327 ' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.327 --rc genhtml_branch_coverage=1 00:21:48.327 --rc genhtml_function_coverage=1 00:21:48.327 --rc genhtml_legend=1 00:21:48.327 --rc geninfo_all_blocks=1 00:21:48.327 --rc geninfo_unexecuted_blocks=1 00:21:48.327 00:21:48.327 ' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.327 --rc genhtml_branch_coverage=1 00:21:48.327 --rc genhtml_function_coverage=1 00:21:48.327 --rc genhtml_legend=1 00:21:48.327 --rc geninfo_all_blocks=1 00:21:48.327 --rc geninfo_unexecuted_blocks=1 00:21:48.327 00:21:48.327 ' 00:21:48.327 05:36:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.327 --rc genhtml_branch_coverage=1 00:21:48.327 --rc genhtml_function_coverage=1 00:21:48.327 --rc genhtml_legend=1 00:21:48.327 --rc geninfo_all_blocks=1 00:21:48.327 --rc geninfo_unexecuted_blocks=1 00:21:48.327 00:21:48.327 ' 00:21:48.327 05:36:51 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.327 05:36:51 -- nvmf/common.sh@7 -- # uname -s 00:21:48.327 05:36:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.327 05:36:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.327 05:36:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.327 05:36:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.327 05:36:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.327 05:36:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.327 05:36:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.327 05:36:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.327 05:36:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.327 05:36:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.327 05:36:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.327 05:36:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.327 05:36:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.327 05:36:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.327 05:36:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.327 05:36:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.327 05:36:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.327 05:36:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.327 05:36:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.327 05:36:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.327 05:36:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.327 05:36:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.327 05:36:51 -- paths/export.sh@5 -- # export PATH 00:21:48.327 05:36:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.327 05:36:51 -- nvmf/common.sh@46 -- # : 0 00:21:48.328 05:36:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:48.328 05:36:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:48.328 05:36:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:48.328 05:36:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.328 05:36:51 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.328 05:36:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:48.328 05:36:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:48.328 05:36:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:48.328 05:36:51 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.328 05:36:51 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.328 05:36:51 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:48.328 05:36:51 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:48.328 05:36:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:48.328 05:36:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.328 05:36:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:48.328 05:36:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:48.328 05:36:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:48.328 05:36:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.328 05:36:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.328 05:36:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.328 05:36:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:48.328 05:36:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:48.328 05:36:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:48.328 05:36:51 -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 05:36:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:56.511 05:36:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:56.511 05:36:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:56.511 05:36:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:56.511 05:36:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:56.511 05:36:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:56.511 05:36:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:56.511 05:36:58 -- nvmf/common.sh@294 -- # net_devs=() 00:21:56.511 05:36:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:56.511 05:36:58 -- nvmf/common.sh@295 -- # e810=() 00:21:56.511 05:36:58 -- nvmf/common.sh@295 -- # local -ga e810 00:21:56.511 05:36:58 -- nvmf/common.sh@296 -- # x722=() 00:21:56.511 05:36:58 -- nvmf/common.sh@296 -- # local -ga x722 00:21:56.511 05:36:58 -- nvmf/common.sh@297 -- # mlx=() 00:21:56.511 05:36:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:56.512 05:36:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.512 05:36:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:56.512 05:36:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:56.512 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:56.512 05:36:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:56.512 05:36:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:56.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:56.512 05:36:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:56.512 05:36:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.512 05:36:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.512 05:36:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:56.512 Found net devices under 0000:31:00.0: cvl_0_0 00:21:56.512 05:36:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:56.512 05:36:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.512 05:36:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.512 05:36:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:56.512 Found net devices under 0000:31:00.1: cvl_0_1 00:21:56.512 05:36:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:56.512 05:36:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:56.512 05:36:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.512 05:36:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.512 05:36:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:56.512 05:36:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.512 05:36:58 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.512 05:36:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:56.512 05:36:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.512 05:36:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.512 05:36:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:56.512 05:36:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:56.512 05:36:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.512 05:36:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.512 05:36:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.512 05:36:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.512 05:36:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:56.512 05:36:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.512 05:36:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.512 05:36:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.512 05:36:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:56.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:21:56.512 00:21:56.512 --- 10.0.0.2 ping statistics --- 00:21:56.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.512 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:21:56.512 05:36:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:21:56.512 00:21:56.512 --- 10.0.0.1 ping statistics --- 00:21:56.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.512 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:21:56.512 05:36:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.512 05:36:58 -- nvmf/common.sh@410 -- # return 0 00:21:56.512 05:36:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:56.512 05:36:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.512 05:36:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:56.512 05:36:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.512 05:36:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:56.512 05:36:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:56.512 05:36:58 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:56.512 05:36:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:56.512 05:36:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.512 05:36:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 05:36:58 -- nvmf/common.sh@469 -- # nvmfpid=1881110 00:21:56.512 05:36:58 -- nvmf/common.sh@470 -- # waitforlisten 1881110 00:21:56.512 05:36:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.512 05:36:58 -- common/autotest_common.sh@829 -- # '[' -z 1881110 ']' 00:21:56.512 05:36:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.512 05:36:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.512 
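Note: unlike the fuzz stage, which pinned nvmf_tgt to a single core (-m 0x1), the multiconnection target is started with -m 0xF, so the EAL/reactor notices that follow report four reactors. The harness then waits for the RPC socket before configuring the target; a simplified stand-in for that wait (not the actual waitforlisten helper, just an illustration of the idea) might look like this:

  # Start the target with four reactors inside the test namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default /var/tmp/spdk.sock RPC socket until the target answers.
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done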
05:36:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.512 05:36:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.512 05:36:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 [2024-12-07 05:36:59.035220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:56.512 [2024-12-07 05:36:59.035272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.512 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.512 [2024-12-07 05:36:59.105300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.512 [2024-12-07 05:36:59.172505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.512 [2024-12-07 05:36:59.172634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.512 [2024-12-07 05:36:59.172645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.512 [2024-12-07 05:36:59.172653] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.512 [2024-12-07 05:36:59.172795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.512 [2024-12-07 05:36:59.172896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.512 [2024-12-07 05:36:59.173055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.512 [2024-12-07 05:36:59.173055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.775 05:36:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.775 05:36:59 -- common/autotest_common.sh@862 -- # return 0 00:21:56.775 05:36:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:56.775 05:36:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.775 05:36:59 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 [2024-12-07 05:36:59.874263] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:56.775 05:36:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.775 05:36:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 Malloc1 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 [2024-12-07 05:36:59.941707] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.775 05:36:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 Malloc2 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 05:36:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.775 05:36:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.775 05:36:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:56.775 05:36:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.775 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 Malloc3 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.036 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 Malloc4 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.036 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:57.036 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.036 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.036 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.037 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 Malloc5 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.037 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 Malloc6 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.037 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 Malloc7 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.037 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.037 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.037 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:57.037 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.037 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 Malloc8 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.298 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 Malloc9 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.298 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 Malloc10 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:57.298 05:37:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.298 05:37:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:57.298 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.298 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.298 Malloc11 00:21:57.298 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.299 05:37:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:57.299 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.299 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.299 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.299 05:37:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:57.299 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.299 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.299 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.299 05:37:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:57.299 05:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.299 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:21:57.299 05:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.299 05:37:00 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:57.299 05:37:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.299 05:37:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:59.212 05:37:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:59.212 05:37:02 -- common/autotest_common.sh@1187 -- # local i=0 00:21:59.212 05:37:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.212 05:37:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:59.212 05:37:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:01.124 05:37:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:01.124 05:37:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:01.124 05:37:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:22:01.124 05:37:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:01.124 05:37:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.124 05:37:04 -- common/autotest_common.sh@1197 -- # return 0 00:22:01.124 05:37:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.124 05:37:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:02.509 05:37:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:02.509 05:37:05 -- common/autotest_common.sh@1187 -- # local i=0 00:22:02.509 05:37:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.509 05:37:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:02.509 05:37:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:04.433 05:37:07 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:04.433 05:37:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:04.433 05:37:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:22:04.433 05:37:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:04.433 05:37:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.433 05:37:07 -- common/autotest_common.sh@1197 -- # return 0 00:22:04.433 05:37:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.433 05:37:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:06.347 05:37:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:06.347 05:37:09 -- common/autotest_common.sh@1187 -- # local i=0 00:22:06.347 05:37:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.347 05:37:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:06.347 05:37:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:08.259 05:37:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:08.259 05:37:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:08.259 05:37:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:22:08.259 05:37:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:08.259 05:37:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.259 05:37:11 -- common/autotest_common.sh@1197 -- # return 0 00:22:08.259 05:37:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.259 05:37:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:09.642 05:37:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:09.642 05:37:12 -- common/autotest_common.sh@1187 -- # local i=0 00:22:09.642 05:37:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.642 05:37:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:09.642 05:37:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:11.549 05:37:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:11.549 05:37:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:11.549 05:37:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:22:11.549 05:37:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:11.549 05:37:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.549 05:37:14 -- common/autotest_common.sh@1197 -- # return 0 00:22:11.549 05:37:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.549 05:37:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:13.459 05:37:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:13.459 05:37:16 -- common/autotest_common.sh@1187 -- # local i=0 00:22:13.459 05:37:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.459 05:37:16 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:13.459 05:37:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:15.367 05:37:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:15.367 05:37:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:15.367 05:37:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:22:15.367 05:37:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:15.367 05:37:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.367 05:37:18 -- common/autotest_common.sh@1197 -- # return 0 00:22:15.367 05:37:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.367 05:37:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:17.281 05:37:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:17.281 05:37:20 -- common/autotest_common.sh@1187 -- # local i=0 00:22:17.281 05:37:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.281 05:37:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:17.281 05:37:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:19.199 05:37:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:19.199 05:37:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:19.199 05:37:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:22:19.199 05:37:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:19.199 05:37:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:19.199 05:37:22 -- common/autotest_common.sh@1197 -- # return 0 00:22:19.199 05:37:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.199 05:37:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:21.116 05:37:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:21.116 05:37:24 -- common/autotest_common.sh@1187 -- # local i=0 00:22:21.116 05:37:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.116 05:37:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:21.116 05:37:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:23.027 05:37:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:23.027 05:37:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:23.027 05:37:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:22:23.027 05:37:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:23.027 05:37:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.027 05:37:26 -- common/autotest_common.sh@1197 -- # return 0 00:22:23.027 05:37:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.027 05:37:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:24.940 05:37:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:24.940 05:37:27 -- common/autotest_common.sh@1187 -- # local i=0 
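The connect sequence traced above follows one per-subsystem pattern that the test repeats eleven times. A minimal shell sketch of that pattern, not the literal multiconnection.sh source: rpc_cmd and waitforserial are the harness helpers visible in the trace, the 10.0.0.2:4420 listener, the Malloc/SPDK/cnode naming and the host UUID are taken from the log, while NVMF_SUBSYS and HOST_UUID as variables are illustrative stand-ins for the hard-coded values shown above.

NVMF_SUBSYS=11
HOST_UUID=00539ede-7deb-ec11-9bc7-a4bf01928396    # hostnqn/hostid UUID shown in the nvme connect lines

# Target side: one 64 MB malloc bdev (512-byte blocks) per subsystem, all exposed on the same TCP listener.
for i in $(seq 1 $NVMF_SUBSYS); do
  rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

# Host side: connect to each subsystem, then wait for a block device with the matching serial to appear.
for i in $(seq 1 $NVMF_SUBSYS); do
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOST_UUID --hostid=$HOST_UUID \
       -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
  waitforserial SPDK$i    # polls lsblk -l -o NAME,SERIAL and greps for SPDK$i, sleeping 2 s between tries
done

The teardown later in the log mirrors this loop: for each subsystem, nvme disconnect -n nqn.2016-06.io.spdk:cnode$i followed by rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i.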
00:22:24.940 05:37:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.940 05:37:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:24.940 05:37:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:26.852 05:37:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:26.852 05:37:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:26.852 05:37:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:22:26.852 05:37:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:26.852 05:37:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.852 05:37:29 -- common/autotest_common.sh@1197 -- # return 0 00:22:26.852 05:37:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.852 05:37:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:28.762 05:37:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:28.762 05:37:31 -- common/autotest_common.sh@1187 -- # local i=0 00:22:28.762 05:37:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.762 05:37:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:28.762 05:37:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:30.672 05:37:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:30.672 05:37:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:30.672 05:37:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:22:30.672 05:37:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:30.672 05:37:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.672 05:37:33 -- common/autotest_common.sh@1197 -- # return 0 00:22:30.672 05:37:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.672 05:37:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:32.581 05:37:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:32.581 05:37:35 -- common/autotest_common.sh@1187 -- # local i=0 00:22:32.581 05:37:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:32.581 05:37:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:32.581 05:37:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:34.488 05:37:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:34.488 05:37:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:34.488 05:37:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:34.488 05:37:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:34.488 05:37:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:34.488 05:37:37 -- common/autotest_common.sh@1197 -- # return 0 00:22:34.488 05:37:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.488 05:37:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:36.396 05:37:39 
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:36.396 05:37:39 -- common/autotest_common.sh@1187 -- # local i=0 00:22:36.396 05:37:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:36.396 05:37:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:36.396 05:37:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:38.309 05:37:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:38.309 05:37:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:38.309 05:37:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:38.309 05:37:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:38.309 05:37:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:38.309 05:37:41 -- common/autotest_common.sh@1197 -- # return 0 00:22:38.309 05:37:41 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:38.309 [global] 00:22:38.309 thread=1 00:22:38.309 invalidate=1 00:22:38.309 rw=read 00:22:38.309 time_based=1 00:22:38.309 runtime=10 00:22:38.309 ioengine=libaio 00:22:38.309 direct=1 00:22:38.309 bs=262144 00:22:38.309 iodepth=64 00:22:38.309 norandommap=1 00:22:38.309 numjobs=1 00:22:38.309 00:22:38.309 [job0] 00:22:38.309 filename=/dev/nvme0n1 00:22:38.309 [job1] 00:22:38.309 filename=/dev/nvme10n1 00:22:38.309 [job2] 00:22:38.309 filename=/dev/nvme1n1 00:22:38.309 [job3] 00:22:38.309 filename=/dev/nvme2n1 00:22:38.309 [job4] 00:22:38.309 filename=/dev/nvme3n1 00:22:38.309 [job5] 00:22:38.309 filename=/dev/nvme4n1 00:22:38.309 [job6] 00:22:38.309 filename=/dev/nvme5n1 00:22:38.309 [job7] 00:22:38.309 filename=/dev/nvme6n1 00:22:38.309 [job8] 00:22:38.309 filename=/dev/nvme7n1 00:22:38.309 [job9] 00:22:38.309 filename=/dev/nvme8n1 00:22:38.309 [job10] 00:22:38.309 filename=/dev/nvme9n1 00:22:38.309 Could not set queue depth (nvme0n1) 00:22:38.309 Could not set queue depth (nvme10n1) 00:22:38.309 Could not set queue depth (nvme1n1) 00:22:38.309 Could not set queue depth (nvme2n1) 00:22:38.309 Could not set queue depth (nvme3n1) 00:22:38.309 Could not set queue depth (nvme4n1) 00:22:38.309 Could not set queue depth (nvme5n1) 00:22:38.309 Could not set queue depth (nvme6n1) 00:22:38.309 Could not set queue depth (nvme7n1) 00:22:38.309 Could not set queue depth (nvme8n1) 00:22:38.309 Could not set queue depth (nvme9n1) 00:22:38.572 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:38.572 fio-3.35 00:22:38.572 Starting 11 threads 00:22:50.913 00:22:50.913 job0: (groupid=0, jobs=1): err= 0: pid=1889754: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=1166, BW=292MiB/s (306MB/s)(2934MiB/10059msec) 00:22:50.913 slat (usec): min=6, max=56131, avg=660.30, stdev=1998.43 00:22:50.913 clat (msec): min=3, max=152, avg=54.16, stdev=20.72 00:22:50.913 lat (msec): min=4, max=157, avg=54.82, stdev=21.02 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 43], 00:22:50.913 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 54], 00:22:50.913 | 70.00th=[ 58], 80.00th=[ 71], 90.00th=[ 79], 95.00th=[ 86], 00:22:50.913 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 153], 00:22:50.913 | 99.99th=[ 153] 00:22:50.913 bw ( KiB/s): min=144384, max=415232, per=11.17%, avg=298761.30, stdev=75770.03, samples=20 00:22:50.913 iops : min= 564, max= 1622, avg=1167.00, stdev=296.00, samples=20 00:22:50.913 lat (msec) : 4=0.01%, 10=0.88%, 20=4.13%, 50=36.41%, 100=55.45% 00:22:50.913 lat (msec) : 250=3.13% 00:22:50.913 cpu : usr=0.45%, sys=3.56%, ctx=3214, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=11734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job1: (groupid=0, jobs=1): err= 0: pid=1889766: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=791, BW=198MiB/s (208MB/s)(1987MiB/10042msec) 00:22:50.913 slat (usec): min=6, max=49254, avg=1250.30, stdev=3448.78 00:22:50.913 clat (msec): min=5, max=194, avg=79.48, stdev=31.95 00:22:50.913 lat (msec): min=5, max=194, avg=80.73, stdev=32.48 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 50], 00:22:50.913 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 81], 60.00th=[ 93], 00:22:50.913 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:22:50.913 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 186], 00:22:50.913 | 99.99th=[ 194] 00:22:50.913 bw ( KiB/s): min=112415, max=440320, per=7.55%, avg=201852.10, stdev=86999.19, samples=20 00:22:50.913 iops : min= 439, max= 1720, avg=788.45, stdev=339.85, samples=20 00:22:50.913 lat (msec) : 10=0.57%, 20=0.48%, 50=20.86%, 100=47.24%, 250=30.86% 00:22:50.913 cpu : usr=0.37%, sys=2.72%, ctx=1691, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=7949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job2: (groupid=0, jobs=1): err= 0: pid=1889779: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=1028, BW=257MiB/s (270MB/s)(2582MiB/10038msec) 00:22:50.913 slat (usec): min=6, max=83810, avg=816.02, stdev=2786.69 00:22:50.913 
clat (usec): min=1875, max=199350, avg=61326.03, stdev=27299.92 00:22:50.913 lat (usec): min=1923, max=222277, avg=62142.05, stdev=27689.04 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 44], 00:22:50.913 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:22:50.913 | 70.00th=[ 66], 80.00th=[ 75], 90.00th=[ 103], 95.00th=[ 123], 00:22:50.913 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 157], 00:22:50.913 | 99.99th=[ 201] 00:22:50.913 bw ( KiB/s): min=147968, max=363008, per=9.82%, avg=262776.60, stdev=60336.80, samples=20 00:22:50.913 iops : min= 578, max= 1418, avg=1026.40, stdev=235.72, samples=20 00:22:50.913 lat (msec) : 2=0.01%, 4=0.30%, 10=1.30%, 20=2.80%, 50=27.59% 00:22:50.913 lat (msec) : 100=57.44%, 250=10.56% 00:22:50.913 cpu : usr=0.40%, sys=3.33%, ctx=2614, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=10327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job3: (groupid=0, jobs=1): err= 0: pid=1889785: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=834, BW=209MiB/s (219MB/s)(2095MiB/10040msec) 00:22:50.913 slat (usec): min=6, max=104038, avg=965.53, stdev=3753.97 00:22:50.913 clat (msec): min=3, max=224, avg=75.60, stdev=29.44 00:22:50.913 lat (msec): min=3, max=240, avg=76.57, stdev=29.97 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 53], 00:22:50.913 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 79], 00:22:50.913 | 70.00th=[ 90], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 129], 00:22:50.913 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 176], 99.95th=[ 192], 00:22:50.913 | 99.99th=[ 226] 00:22:50.913 bw ( KiB/s): min=134925, max=353596, per=7.96%, avg=212893.45, stdev=52506.92, samples=20 00:22:50.913 iops : min= 527, max= 1381, avg=831.60, stdev=205.08, samples=20 00:22:50.913 lat (msec) : 4=0.04%, 10=0.66%, 20=1.31%, 50=15.11%, 100=59.77% 00:22:50.913 lat (msec) : 250=23.12% 00:22:50.913 cpu : usr=0.37%, sys=2.43%, ctx=2150, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=8381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job4: (groupid=0, jobs=1): err= 0: pid=1889791: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=679, BW=170MiB/s (178MB/s)(1707MiB/10042msec) 00:22:50.913 slat (usec): min=5, max=43271, avg=1232.93, stdev=3396.58 00:22:50.913 clat (msec): min=7, max=165, avg=92.81, stdev=24.75 00:22:50.913 lat (msec): min=7, max=165, avg=94.04, stdev=25.03 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 20], 5.00th=[ 47], 10.00th=[ 62], 20.00th=[ 74], 00:22:50.913 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 101], 00:22:50.913 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 123], 95.00th=[ 131], 00:22:50.913 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 163], 00:22:50.913 | 99.99th=[ 167] 00:22:50.913 bw ( KiB/s): min=122634, 
max=252928, per=6.47%, avg=173164.20, stdev=36194.95, samples=20 00:22:50.913 iops : min= 479, max= 988, avg=676.40, stdev=141.38, samples=20 00:22:50.913 lat (msec) : 10=0.16%, 20=0.89%, 50=5.04%, 100=53.24%, 250=40.66% 00:22:50.913 cpu : usr=0.35%, sys=2.35%, ctx=1808, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=6827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job5: (groupid=0, jobs=1): err= 0: pid=1889806: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=1065, BW=266MiB/s (279MB/s)(2681MiB/10069msec) 00:22:50.913 slat (usec): min=7, max=40169, avg=891.67, stdev=2178.19 00:22:50.913 clat (msec): min=6, max=139, avg=59.12, stdev=17.80 00:22:50.913 lat (msec): min=6, max=153, avg=60.01, stdev=18.06 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 47], 00:22:50.913 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 59], 00:22:50.913 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 80], 95.00th=[ 86], 00:22:50.913 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 138], 00:22:50.913 | 99.99th=[ 140] 00:22:50.913 bw ( KiB/s): min=178176, max=352574, per=10.20%, avg=272865.60, stdev=49186.88, samples=20 00:22:50.913 iops : min= 696, max= 1377, avg=1065.85, stdev=192.13, samples=20 00:22:50.913 lat (msec) : 10=0.21%, 20=0.32%, 50=26.54%, 100=70.22%, 250=2.72% 00:22:50.913 cpu : usr=0.48%, sys=4.28%, ctx=2497, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=10724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.913 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.913 job6: (groupid=0, jobs=1): err= 0: pid=1889809: Sat Dec 7 05:37:52 2024 00:22:50.913 read: IOPS=1158, BW=290MiB/s (304MB/s)(2911MiB/10047msec) 00:22:50.913 slat (usec): min=6, max=90691, avg=761.07, stdev=2610.01 00:22:50.913 clat (msec): min=3, max=224, avg=54.42, stdev=24.99 00:22:50.913 lat (msec): min=3, max=224, avg=55.18, stdev=25.37 00:22:50.913 clat percentiles (msec): 00:22:50.913 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 37], 00:22:50.913 | 30.00th=[ 40], 40.00th=[ 42], 50.00th=[ 46], 60.00th=[ 52], 00:22:50.913 | 70.00th=[ 60], 80.00th=[ 73], 90.00th=[ 96], 95.00th=[ 108], 00:22:50.913 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 142], 00:22:50.913 | 99.99th=[ 224] 00:22:50.913 bw ( KiB/s): min=158720, max=425984, per=11.08%, avg=296381.85, stdev=83279.77, samples=20 00:22:50.913 iops : min= 620, max= 1664, avg=1157.70, stdev=325.25, samples=20 00:22:50.913 lat (msec) : 4=0.03%, 10=0.90%, 20=2.04%, 50=55.28%, 100=33.40% 00:22:50.913 lat (msec) : 250=8.35% 00:22:50.913 cpu : usr=0.41%, sys=3.45%, ctx=2689, majf=0, minf=4097 00:22:50.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.913 issued rwts: total=11642,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:50.914 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.914 job7: (groupid=0, jobs=1): err= 0: pid=1889817: Sat Dec 7 05:37:52 2024 00:22:50.914 read: IOPS=680, BW=170MiB/s (178MB/s)(1711MiB/10066msec) 00:22:50.914 slat (usec): min=7, max=53894, avg=1297.60, stdev=3682.32 00:22:50.914 clat (msec): min=12, max=190, avg=92.68, stdev=23.92 00:22:50.914 lat (msec): min=12, max=190, avg=93.98, stdev=24.34 00:22:50.914 clat percentiles (msec): 00:22:50.914 | 1.00th=[ 32], 5.00th=[ 55], 10.00th=[ 65], 20.00th=[ 75], 00:22:50.914 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 99], 00:22:50.914 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 133], 00:22:50.914 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 174], 00:22:50.914 | 99.99th=[ 190] 00:22:50.914 bw ( KiB/s): min=119568, max=239648, per=6.49%, avg=173583.20, stdev=35807.99, samples=20 00:22:50.914 iops : min= 467, max= 936, avg=678.05, stdev=139.87, samples=20 00:22:50.914 lat (msec) : 20=0.50%, 50=3.35%, 100=59.59%, 250=36.57% 00:22:50.914 cpu : usr=0.21%, sys=2.17%, ctx=1735, majf=0, minf=4097 00:22:50.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:50.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.914 issued rwts: total=6845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.914 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.914 job8: (groupid=0, jobs=1): err= 0: pid=1889847: Sat Dec 7 05:37:52 2024 00:22:50.914 read: IOPS=731, BW=183MiB/s (192MB/s)(1839MiB/10054msec) 00:22:50.914 slat (usec): min=6, max=82868, avg=1123.63, stdev=4037.31 00:22:50.914 clat (msec): min=5, max=231, avg=86.24, stdev=31.54 00:22:50.914 lat (msec): min=5, max=231, avg=87.37, stdev=32.20 00:22:50.914 clat percentiles (msec): 00:22:50.914 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 43], 20.00th=[ 58], 00:22:50.914 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 99], 00:22:50.914 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 127], 95.00th=[ 133], 00:22:50.914 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 190], 00:22:50.914 | 99.99th=[ 232] 00:22:50.914 bw ( KiB/s): min=107816, max=323072, per=6.98%, avg=186690.00, stdev=53999.60, samples=20 00:22:50.914 iops : min= 421, max= 1262, avg=729.25, stdev=210.95, samples=20 00:22:50.914 lat (msec) : 10=0.27%, 20=2.34%, 50=11.87%, 100=47.08%, 250=38.44% 00:22:50.914 cpu : usr=0.36%, sys=2.29%, ctx=1957, majf=0, minf=4097 00:22:50.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:50.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.914 issued rwts: total=7357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.914 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.914 job9: (groupid=0, jobs=1): err= 0: pid=1889860: Sat Dec 7 05:37:52 2024 00:22:50.914 read: IOPS=1394, BW=349MiB/s (366MB/s)(3505MiB/10052msec) 00:22:50.914 slat (usec): min=5, max=54183, avg=706.73, stdev=1969.05 00:22:50.914 clat (msec): min=2, max=161, avg=45.12, stdev=23.98 00:22:50.914 lat (msec): min=2, max=182, avg=45.83, stdev=24.34 00:22:50.914 clat percentiles (msec): 00:22:50.914 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 28], 00:22:50.914 | 30.00th=[ 29], 40.00th=[ 31], 50.00th=[ 35], 60.00th=[ 
44], 00:22:50.914 | 70.00th=[ 51], 80.00th=[ 59], 90.00th=[ 80], 95.00th=[ 104], 00:22:50.914 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 150], 00:22:50.914 | 99.99th=[ 155] 00:22:50.914 bw ( KiB/s): min=129282, max=594944, per=13.35%, avg=357245.90, stdev=143420.68, samples=20 00:22:50.914 iops : min= 505, max= 2324, avg=1395.40, stdev=560.26, samples=20 00:22:50.914 lat (msec) : 4=0.04%, 10=0.15%, 20=0.29%, 50=69.21%, 100=24.25% 00:22:50.914 lat (msec) : 250=6.06% 00:22:50.914 cpu : usr=0.41%, sys=3.81%, ctx=2810, majf=0, minf=4097 00:22:50.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:50.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.914 issued rwts: total=14020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.914 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.914 job10: (groupid=0, jobs=1): err= 0: pid=1889871: Sat Dec 7 05:37:52 2024 00:22:50.914 read: IOPS=937, BW=234MiB/s (246MB/s)(2354MiB/10045msec) 00:22:50.914 slat (usec): min=6, max=70743, avg=1048.04, stdev=2676.09 00:22:50.914 clat (msec): min=5, max=184, avg=67.13, stdev=19.07 00:22:50.914 lat (msec): min=5, max=184, avg=68.18, stdev=19.31 00:22:50.914 clat percentiles (msec): 00:22:50.914 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:22:50.914 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:22:50.914 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 106], 00:22:50.914 | 99.00th=[ 127], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:22:50.914 | 99.99th=[ 184] 00:22:50.914 bw ( KiB/s): min=156672, max=310272, per=8.95%, avg=239382.20, stdev=42512.61, samples=20 00:22:50.914 iops : min= 612, max= 1212, avg=935.05, stdev=166.02, samples=20 00:22:50.914 lat (msec) : 10=0.12%, 20=0.92%, 50=14.08%, 100=78.82%, 250=6.05% 00:22:50.914 cpu : usr=0.42%, sys=3.30%, ctx=1927, majf=0, minf=3534 00:22:50.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:50.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:50.914 issued rwts: total=9416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.914 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.914 00:22:50.914 Run status group 0 (all jobs): 00:22:50.914 READ: bw=2613MiB/s (2739MB/s), 170MiB/s-349MiB/s (178MB/s-366MB/s), io=25.7GiB (27.6GB), run=10038-10069msec 00:22:50.914 00:22:50.914 Disk stats (read/write): 00:22:50.914 nvme0n1: ios=23149/0, merge=0/0, ticks=1219059/0, in_queue=1219059, util=96.31% 00:22:50.914 nvme10n1: ios=15484/0, merge=0/0, ticks=1214061/0, in_queue=1214061, util=96.64% 00:22:50.914 nvme1n1: ios=20200/0, merge=0/0, ticks=1220857/0, in_queue=1220857, util=96.99% 00:22:50.914 nvme2n1: ios=16352/0, merge=0/0, ticks=1218806/0, in_queue=1218806, util=97.23% 00:22:50.914 nvme3n1: ios=13217/0, merge=0/0, ticks=1220630/0, in_queue=1220630, util=97.35% 00:22:50.914 nvme4n1: ios=20990/0, merge=0/0, ticks=1211994/0, in_queue=1211994, util=97.80% 00:22:50.914 nvme5n1: ios=22769/0, merge=0/0, ticks=1218231/0, in_queue=1218231, util=97.99% 00:22:50.914 nvme6n1: ios=13380/0, merge=0/0, ticks=1212218/0, in_queue=1212218, util=98.24% 00:22:50.914 nvme7n1: ios=14231/0, merge=0/0, ticks=1214986/0, in_queue=1214986, util=98.77% 00:22:50.914 nvme8n1: ios=27547/0, merge=0/0, ticks=1220045/0, 
in_queue=1220045, util=99.05% 00:22:50.914 nvme9n1: ios=18447/0, merge=0/0, ticks=1218326/0, in_queue=1218326, util=99.26% 00:22:50.914 05:37:52 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:50.914 [global] 00:22:50.914 thread=1 00:22:50.914 invalidate=1 00:22:50.914 rw=randwrite 00:22:50.914 time_based=1 00:22:50.914 runtime=10 00:22:50.914 ioengine=libaio 00:22:50.914 direct=1 00:22:50.914 bs=262144 00:22:50.914 iodepth=64 00:22:50.914 norandommap=1 00:22:50.914 numjobs=1 00:22:50.914 00:22:50.914 [job0] 00:22:50.914 filename=/dev/nvme0n1 00:22:50.914 [job1] 00:22:50.914 filename=/dev/nvme10n1 00:22:50.914 [job2] 00:22:50.914 filename=/dev/nvme1n1 00:22:50.914 [job3] 00:22:50.914 filename=/dev/nvme2n1 00:22:50.914 [job4] 00:22:50.914 filename=/dev/nvme3n1 00:22:50.914 [job5] 00:22:50.914 filename=/dev/nvme4n1 00:22:50.914 [job6] 00:22:50.914 filename=/dev/nvme5n1 00:22:50.914 [job7] 00:22:50.914 filename=/dev/nvme6n1 00:22:50.914 [job8] 00:22:50.914 filename=/dev/nvme7n1 00:22:50.914 [job9] 00:22:50.914 filename=/dev/nvme8n1 00:22:50.914 [job10] 00:22:50.914 filename=/dev/nvme9n1 00:22:50.914 Could not set queue depth (nvme0n1) 00:22:50.914 Could not set queue depth (nvme10n1) 00:22:50.914 Could not set queue depth (nvme1n1) 00:22:50.914 Could not set queue depth (nvme2n1) 00:22:50.914 Could not set queue depth (nvme3n1) 00:22:50.914 Could not set queue depth (nvme4n1) 00:22:50.914 Could not set queue depth (nvme5n1) 00:22:50.914 Could not set queue depth (nvme6n1) 00:22:50.914 Could not set queue depth (nvme7n1) 00:22:50.914 Could not set queue depth (nvme8n1) 00:22:50.914 Could not set queue depth (nvme9n1) 00:22:50.914 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:50.914 fio-3.35 00:22:50.914 Starting 11 threads 00:23:00.930 00:23:00.930 job0: (groupid=0, jobs=1): err= 0: pid=1892091: Sat Dec 7 05:38:03 2024 00:23:00.930 write: IOPS=737, BW=184MiB/s (193MB/s)(1857MiB/10074msec); 0 zone resets 00:23:00.930 slat (usec): min=29, max=125262, avg=1312.33, stdev=2978.15 00:23:00.930 clat (msec): min=28, max=242, avg=85.42, stdev=24.23 00:23:00.930 lat 
(msec): min=28, max=242, avg=86.74, stdev=24.46 00:23:00.930 clat percentiles (msec): 00:23:00.930 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 63], 00:23:00.930 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 87], 60.00th=[ 100], 00:23:00.930 | 70.00th=[ 102], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 116], 00:23:00.930 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 239], 00:23:00.930 | 99.99th=[ 243] 00:23:00.930 bw ( KiB/s): min=117760, max=274944, per=9.88%, avg=188569.60, stdev=48952.32, samples=20 00:23:00.930 iops : min= 460, max= 1074, avg=736.60, stdev=191.22, samples=20 00:23:00.930 lat (msec) : 50=0.55%, 100=61.93%, 250=37.52% 00:23:00.930 cpu : usr=1.60%, sys=2.59%, ctx=1904, majf=0, minf=1 00:23:00.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:00.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.930 issued rwts: total=0,7429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.930 job1: (groupid=0, jobs=1): err= 0: pid=1892116: Sat Dec 7 05:38:03 2024 00:23:00.930 write: IOPS=742, BW=186MiB/s (195MB/s)(1869MiB/10073msec); 0 zone resets 00:23:00.930 slat (usec): min=37, max=38956, avg=1315.97, stdev=2376.92 00:23:00.930 clat (msec): min=13, max=157, avg=84.88, stdev=19.97 00:23:00.930 lat (msec): min=13, max=158, avg=86.20, stdev=20.20 00:23:00.930 clat percentiles (msec): 00:23:00.930 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:23:00.930 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 91], 60.00th=[ 99], 00:23:00.930 | 70.00th=[ 102], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 111], 00:23:00.930 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 146], 99.95th=[ 153], 00:23:00.930 | 99.99th=[ 159] 00:23:00.930 bw ( KiB/s): min=147968, max=259584, per=9.94%, avg=189747.20, stdev=43431.75, samples=20 00:23:00.930 iops : min= 578, max= 1014, avg=741.20, stdev=169.66, samples=20 00:23:00.930 lat (msec) : 20=0.11%, 50=0.29%, 100=64.67%, 250=34.93% 00:23:00.930 cpu : usr=2.41%, sys=2.90%, ctx=1891, majf=0, minf=1 00:23:00.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:00.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.930 issued rwts: total=0,7475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.930 job2: (groupid=0, jobs=1): err= 0: pid=1892136: Sat Dec 7 05:38:03 2024 00:23:00.930 write: IOPS=882, BW=221MiB/s (231MB/s)(2238MiB/10143msec); 0 zone resets 00:23:00.930 slat (usec): min=19, max=73861, avg=1035.66, stdev=2586.11 00:23:00.930 clat (msec): min=2, max=314, avg=71.47, stdev=44.50 00:23:00.930 lat (msec): min=2, max=314, avg=72.50, stdev=45.10 00:23:00.930 clat percentiles (msec): 00:23:00.930 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 49], 00:23:00.930 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:23:00.930 | 70.00th=[ 67], 80.00th=[ 83], 90.00th=[ 100], 95.00th=[ 203], 00:23:00.930 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 300], 99.95th=[ 309], 00:23:00.930 | 99.99th=[ 317] 00:23:00.930 bw ( KiB/s): min=69632, max=347136, per=11.92%, avg=227507.20, stdev=83394.61, samples=20 00:23:00.930 iops : min= 272, max= 1356, avg=888.70, stdev=325.76, samples=20 00:23:00.930 lat (msec) : 
4=0.07%, 10=0.89%, 20=1.78%, 50=20.88%, 100=66.55% 00:23:00.930 lat (msec) : 250=9.27%, 500=0.56% 00:23:00.930 cpu : usr=1.98%, sys=2.53%, ctx=2903, majf=0, minf=1 00:23:00.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:00.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.930 issued rwts: total=0,8950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.930 job3: (groupid=0, jobs=1): err= 0: pid=1892148: Sat Dec 7 05:38:03 2024 00:23:00.930 write: IOPS=629, BW=157MiB/s (165MB/s)(1596MiB/10143msec); 0 zone resets 00:23:00.930 slat (usec): min=16, max=54407, avg=1539.65, stdev=3262.45 00:23:00.930 clat (msec): min=8, max=320, avg=100.08, stdev=52.91 00:23:00.930 lat (msec): min=9, max=320, avg=101.62, stdev=53.66 00:23:00.930 clat percentiles (msec): 00:23:00.930 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 51], 20.00th=[ 56], 00:23:00.930 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 128], 00:23:00.930 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 197], 00:23:00.930 | 99.00th=[ 211], 99.50th=[ 215], 99.90th=[ 271], 99.95th=[ 288], 00:23:00.930 | 99.99th=[ 321] 00:23:00.930 bw ( KiB/s): min=79872, max=364544, per=8.48%, avg=161843.20, stdev=89479.26, samples=20 00:23:00.930 iops : min= 312, max= 1424, avg=632.20, stdev=349.53, samples=20 00:23:00.930 lat (msec) : 10=0.06%, 20=0.19%, 50=9.71%, 100=43.35%, 250=46.47% 00:23:00.930 lat (msec) : 500=0.22% 00:23:00.930 cpu : usr=1.39%, sys=1.88%, ctx=1710, majf=0, minf=2 00:23:00.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:00.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.930 issued rwts: total=0,6385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job4: (groupid=0, jobs=1): err= 0: pid=1892155: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=521, BW=130MiB/s (137MB/s)(1311MiB/10062msec); 0 zone resets 00:23:00.931 slat (usec): min=20, max=33268, avg=1826.10, stdev=3590.85 00:23:00.931 clat (msec): min=5, max=214, avg=120.91, stdev=41.80 00:23:00.931 lat (msec): min=6, max=214, avg=122.74, stdev=42.37 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 19], 5.00th=[ 52], 10.00th=[ 71], 20.00th=[ 88], 00:23:00.931 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 124], 60.00th=[ 131], 00:23:00.931 | 70.00th=[ 138], 80.00th=[ 148], 90.00th=[ 184], 95.00th=[ 197], 00:23:00.931 | 99.00th=[ 211], 99.50th=[ 213], 99.90th=[ 213], 99.95th=[ 215], 00:23:00.931 | 99.99th=[ 215] 00:23:00.931 bw ( KiB/s): min=79872, max=244224, per=6.95%, avg=132659.20, stdev=40180.85, samples=20 00:23:00.931 iops : min= 312, max= 954, avg=518.20, stdev=156.96, samples=20 00:23:00.931 lat (msec) : 10=0.15%, 20=1.05%, 50=3.60%, 100=24.42%, 250=70.77% 00:23:00.931 cpu : usr=1.22%, sys=1.40%, ctx=1597, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,5245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:23:00.931 job5: (groupid=0, jobs=1): err= 0: pid=1892178: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=529, BW=132MiB/s (139MB/s)(1342MiB/10143msec); 0 zone resets 00:23:00.931 slat (usec): min=18, max=47818, avg=1790.71, stdev=3750.44 00:23:00.931 clat (usec): min=1169, max=322902, avg=118742.23, stdev=49552.32 00:23:00.931 lat (usec): min=1252, max=322941, avg=120532.94, stdev=50215.36 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 55], 20.00th=[ 58], 00:23:00.931 | 30.00th=[ 77], 40.00th=[ 127], 50.00th=[ 134], 60.00th=[ 138], 00:23:00.931 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 184], 95.00th=[ 194], 00:23:00.931 | 99.00th=[ 209], 99.50th=[ 226], 99.90th=[ 313], 99.95th=[ 313], 00:23:00.931 | 99.99th=[ 321] 00:23:00.931 bw ( KiB/s): min=83968, max=294400, per=7.11%, avg=135808.00, stdev=60984.90, samples=20 00:23:00.931 iops : min= 328, max= 1150, avg=530.50, stdev=238.22, samples=20 00:23:00.931 lat (msec) : 2=0.11%, 4=0.20%, 10=0.48%, 20=0.39%, 50=4.69% 00:23:00.931 lat (msec) : 100=26.70%, 250=67.08%, 500=0.34% 00:23:00.931 cpu : usr=1.09%, sys=1.61%, ctx=1534, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,5368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job6: (groupid=0, jobs=1): err= 0: pid=1892190: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=446, BW=112MiB/s (117MB/s)(1133MiB/10146msec); 0 zone resets 00:23:00.931 slat (usec): min=33, max=53233, avg=2124.89, stdev=4162.59 00:23:00.931 clat (msec): min=30, max=296, avg=140.68, stdev=33.31 00:23:00.931 lat (msec): min=32, max=296, avg=142.81, stdev=33.62 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 50], 5.00th=[ 90], 10.00th=[ 107], 20.00th=[ 124], 00:23:00.931 | 30.00th=[ 130], 40.00th=[ 134], 50.00th=[ 138], 60.00th=[ 142], 00:23:00.931 | 70.00th=[ 146], 80.00th=[ 157], 90.00th=[ 192], 95.00th=[ 203], 00:23:00.931 | 99.00th=[ 220], 99.50th=[ 239], 99.90th=[ 288], 99.95th=[ 288], 00:23:00.931 | 99.99th=[ 296] 00:23:00.931 bw ( KiB/s): min=77824, max=158208, per=5.99%, avg=114355.20, stdev=21730.75, samples=20 00:23:00.931 iops : min= 304, max= 618, avg=446.70, stdev=84.89, samples=20 00:23:00.931 lat (msec) : 50=1.04%, 100=5.91%, 250=92.65%, 500=0.40% 00:23:00.931 cpu : usr=0.92%, sys=1.41%, ctx=1273, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,4531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job7: (groupid=0, jobs=1): err= 0: pid=1892200: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=851, BW=213MiB/s (223MB/s)(2141MiB/10058msec); 0 zone resets 00:23:00.931 slat (usec): min=23, max=29302, avg=1132.55, stdev=2108.56 00:23:00.931 clat (msec): min=3, max=156, avg=74.02, stdev=23.96 00:23:00.931 lat (msec): min=3, max=157, avg=75.15, stdev=24.29 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 19], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 59], 00:23:00.931 | 30.00th=[ 61], 
40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 68], 00:23:00.931 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 112], 95.00th=[ 128], 00:23:00.931 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 155], 00:23:00.931 | 99.99th=[ 157] 00:23:00.931 bw ( KiB/s): min=124928, max=273920, per=11.40%, avg=217600.00, stdev=48168.12, samples=20 00:23:00.931 iops : min= 488, max= 1070, avg=850.00, stdev=188.16, samples=20 00:23:00.931 lat (msec) : 4=0.02%, 10=0.33%, 20=0.84%, 50=2.69%, 100=80.26% 00:23:00.931 lat (msec) : 250=15.86% 00:23:00.931 cpu : usr=2.04%, sys=2.76%, ctx=2332, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,8563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job8: (groupid=0, jobs=1): err= 0: pid=1892233: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=495, BW=124MiB/s (130MB/s)(1258MiB/10146msec); 0 zone resets 00:23:00.931 slat (usec): min=28, max=39306, avg=1959.72, stdev=3630.22 00:23:00.931 clat (msec): min=15, max=306, avg=127.08, stdev=34.15 00:23:00.931 lat (msec): min=15, max=306, avg=129.04, stdev=34.47 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 88], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 102], 00:23:00.931 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 108], 60.00th=[ 132], 00:23:00.931 | 70.00th=[ 138], 80.00th=[ 155], 90.00th=[ 186], 95.00th=[ 194], 00:23:00.931 | 99.00th=[ 203], 99.50th=[ 245], 99.90th=[ 296], 99.95th=[ 296], 00:23:00.931 | 99.99th=[ 309] 00:23:00.931 bw ( KiB/s): min=83968, max=163840, per=6.66%, avg=127129.60, stdev=29739.69, samples=20 00:23:00.931 iops : min= 328, max= 640, avg=496.60, stdev=116.17, samples=20 00:23:00.931 lat (msec) : 20=0.16%, 50=0.32%, 100=14.59%, 250=84.49%, 500=0.44% 00:23:00.931 cpu : usr=1.11%, sys=1.37%, ctx=1306, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,5030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job9: (groupid=0, jobs=1): err= 0: pid=1892245: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=687, BW=172MiB/s (180MB/s)(1733MiB/10077msec); 0 zone resets 00:23:00.931 slat (usec): min=25, max=129205, avg=1303.23, stdev=3795.70 00:23:00.931 clat (usec): min=1720, max=365120, avg=91697.84, stdev=56467.95 00:23:00.931 lat (usec): min=1783, max=370231, avg=93001.07, stdev=57154.08 00:23:00.931 clat percentiles (msec): 00:23:00.931 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 15], 20.00th=[ 54], 00:23:00.931 | 30.00th=[ 61], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 102], 00:23:00.931 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 126], 95.00th=[ 230], 00:23:00.931 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 321], 99.95th=[ 351], 00:23:00.931 | 99.99th=[ 368] 00:23:00.931 bw ( KiB/s): min=98816, max=293376, per=9.21%, avg=175820.80, stdev=46736.85, samples=20 00:23:00.931 iops : min= 386, max= 1146, avg=686.80, stdev=182.57, samples=20 00:23:00.931 lat (msec) : 2=0.04%, 4=1.51%, 10=6.56%, 20=3.36%, 50=4.96% 00:23:00.931 lat (msec) : 100=35.29%, 250=44.41%, 
500=3.85% 00:23:00.931 cpu : usr=1.65%, sys=2.05%, ctx=2714, majf=0, minf=1 00:23:00.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:00.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.931 issued rwts: total=0,6931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.931 job10: (groupid=0, jobs=1): err= 0: pid=1892248: Sat Dec 7 05:38:03 2024 00:23:00.931 write: IOPS=970, BW=243MiB/s (254MB/s)(2438MiB/10051msec); 0 zone resets 00:23:00.931 slat (usec): min=17, max=38121, avg=980.87, stdev=1840.48 00:23:00.932 clat (msec): min=2, max=171, avg=64.96, stdev=19.46 00:23:00.932 lat (msec): min=2, max=171, avg=65.95, stdev=19.71 00:23:00.932 clat percentiles (msec): 00:23:00.932 | 1.00th=[ 19], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 53], 00:23:00.932 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 64], 00:23:00.932 | 70.00th=[ 66], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 104], 00:23:00.932 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 165], 00:23:00.932 | 99.99th=[ 171] 00:23:00.932 bw ( KiB/s): min=136704, max=336384, per=12.99%, avg=248038.40, stdev=49545.78, samples=20 00:23:00.932 iops : min= 534, max= 1314, avg=968.90, stdev=193.54, samples=20 00:23:00.932 lat (msec) : 4=0.07%, 10=0.41%, 20=0.64%, 50=11.63%, 100=81.68% 00:23:00.932 lat (msec) : 250=5.58% 00:23:00.932 cpu : usr=2.12%, sys=2.59%, ctx=2766, majf=0, minf=1 00:23:00.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:00.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.932 issued rwts: total=0,9752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.932 00:23:00.932 Run status group 0 (all jobs): 00:23:00.932 WRITE: bw=1864MiB/s (1955MB/s), 112MiB/s-243MiB/s (117MB/s-254MB/s), io=18.5GiB (19.8GB), run=10051-10146msec 00:23:00.932 00:23:00.932 Disk stats (read/write): 00:23:00.932 nvme0n1: ios=47/14479, merge=0/0, ticks=1863/1180762, in_queue=1182625, util=99.95% 00:23:00.932 nvme10n1: ios=47/14578, merge=0/0, ticks=2519/1195966, in_queue=1198485, util=100.00% 00:23:00.932 nvme1n1: ios=0/17844, merge=0/0, ticks=0/1226142, in_queue=1226142, util=96.97% 00:23:00.932 nvme2n1: ios=0/12715, merge=0/0, ticks=0/1223905, in_queue=1223905, util=97.20% 00:23:00.932 nvme3n1: ios=0/10027, merge=0/0, ticks=0/1200198, in_queue=1200198, util=97.23% 00:23:00.932 nvme4n1: ios=47/10681, merge=0/0, ticks=1794/1214925, in_queue=1216719, util=100.00% 00:23:00.932 nvme5n1: ios=45/9001, merge=0/0, ticks=1776/1217738, in_queue=1219514, util=100.00% 00:23:00.932 nvme6n1: ios=0/16656, merge=0/0, ticks=0/1199686, in_queue=1199686, util=98.09% 00:23:00.932 nvme7n1: ios=0/10000, merge=0/0, ticks=0/1224534, in_queue=1224534, util=98.67% 00:23:00.932 nvme8n1: ios=38/13425, merge=0/0, ticks=593/1193920, in_queue=1194513, util=100.00% 00:23:00.932 nvme9n1: ios=0/18930, merge=0/0, ticks=0/1202565, in_queue=1202565, util=99.07% 00:23:00.932 05:38:03 -- target/multiconnection.sh@36 -- # sync 00:23:00.932 05:38:03 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:00.932 05:38:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.932 05:38:03 -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:00.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:00.932 05:38:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:00.932 05:38:03 -- common/autotest_common.sh@1208 -- # local i=0 00:23:00.932 05:38:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:00.932 05:38:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:23:00.932 05:38:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:00.932 05:38:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:23:00.932 05:38:04 -- common/autotest_common.sh@1220 -- # return 0 00:23:00.932 05:38:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.932 05:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.932 05:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:00.932 05:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.932 05:38:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.932 05:38:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:01.193 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:01.193 05:38:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:01.193 05:38:04 -- common/autotest_common.sh@1208 -- # local i=0 00:23:01.193 05:38:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:01.193 05:38:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:23:01.193 05:38:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:01.193 05:38:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:23:01.193 05:38:04 -- common/autotest_common.sh@1220 -- # return 0 00:23:01.193 05:38:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:01.193 05:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.193 05:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:01.193 05:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.193 05:38:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.193 05:38:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:01.454 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:01.454 05:38:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:01.454 05:38:04 -- common/autotest_common.sh@1208 -- # local i=0 00:23:01.454 05:38:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:01.454 05:38:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:23:01.454 05:38:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:01.454 05:38:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:23:01.454 05:38:04 -- common/autotest_common.sh@1220 -- # return 0 00:23:01.454 05:38:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:01.454 05:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.454 05:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:01.454 05:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.454 05:38:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.454 05:38:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:01.716 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 
1 controller(s) 00:23:01.716 05:38:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:01.716 05:38:04 -- common/autotest_common.sh@1208 -- # local i=0 00:23:01.716 05:38:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:01.716 05:38:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:23:01.716 05:38:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:01.716 05:38:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:23:01.977 05:38:04 -- common/autotest_common.sh@1220 -- # return 0 00:23:01.977 05:38:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:01.977 05:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.977 05:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:01.977 05:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.977 05:38:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.977 05:38:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:02.237 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:02.237 05:38:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:02.237 05:38:05 -- common/autotest_common.sh@1208 -- # local i=0 00:23:02.237 05:38:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:02.237 05:38:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:23:02.237 05:38:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:02.237 05:38:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:23:02.237 05:38:05 -- common/autotest_common.sh@1220 -- # return 0 00:23:02.237 05:38:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:02.237 05:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.237 05:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.237 05:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.237 05:38:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.237 05:38:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:02.237 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:02.237 05:38:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:02.237 05:38:05 -- common/autotest_common.sh@1208 -- # local i=0 00:23:02.237 05:38:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:02.237 05:38:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:23:02.237 05:38:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:02.237 05:38:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:23:02.237 05:38:05 -- common/autotest_common.sh@1220 -- # return 0 00:23:02.237 05:38:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:02.237 05:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.237 05:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.498 05:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.498 05:38:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.498 05:38:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:02.498 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:02.498 05:38:05 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK7 00:23:02.498 05:38:05 -- common/autotest_common.sh@1208 -- # local i=0 00:23:02.498 05:38:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:02.498 05:38:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:23:02.498 05:38:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:02.498 05:38:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:23:02.498 05:38:05 -- common/autotest_common.sh@1220 -- # return 0 00:23:02.498 05:38:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:02.498 05:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.498 05:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.498 05:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.498 05:38:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.498 05:38:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:02.759 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:02.759 05:38:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:02.759 05:38:05 -- common/autotest_common.sh@1208 -- # local i=0 00:23:02.759 05:38:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:02.759 05:38:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:23:02.759 05:38:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:02.759 05:38:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:23:02.759 05:38:05 -- common/autotest_common.sh@1220 -- # return 0 00:23:02.759 05:38:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:02.759 05:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.759 05:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.759 05:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.759 05:38:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.759 05:38:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:03.019 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:03.019 05:38:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:03.019 05:38:06 -- common/autotest_common.sh@1208 -- # local i=0 00:23:03.019 05:38:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:03.019 05:38:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:23:03.019 05:38:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:03.019 05:38:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:23:03.019 05:38:06 -- common/autotest_common.sh@1220 -- # return 0 00:23:03.019 05:38:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:03.019 05:38:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.019 05:38:06 -- common/autotest_common.sh@10 -- # set +x 00:23:03.019 05:38:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.019 05:38:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.019 05:38:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:03.281 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:03.281 05:38:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:03.281 05:38:06 -- common/autotest_common.sh@1208 -- # 
local i=0 00:23:03.281 05:38:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:03.281 05:38:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:23:03.281 05:38:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:03.281 05:38:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:23:03.281 05:38:06 -- common/autotest_common.sh@1220 -- # return 0 00:23:03.281 05:38:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:03.281 05:38:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.281 05:38:06 -- common/autotest_common.sh@10 -- # set +x 00:23:03.281 05:38:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.281 05:38:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.281 05:38:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:03.281 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:03.281 05:38:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:03.281 05:38:06 -- common/autotest_common.sh@1208 -- # local i=0 00:23:03.281 05:38:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:03.281 05:38:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:23:03.281 05:38:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:03.281 05:38:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:23:03.281 05:38:06 -- common/autotest_common.sh@1220 -- # return 0 00:23:03.281 05:38:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:03.281 05:38:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.281 05:38:06 -- common/autotest_common.sh@10 -- # set +x 00:23:03.281 05:38:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.281 05:38:06 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:03.281 05:38:06 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:03.281 05:38:06 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:03.281 05:38:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:03.281 05:38:06 -- nvmf/common.sh@116 -- # sync 00:23:03.281 05:38:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:03.281 05:38:06 -- nvmf/common.sh@119 -- # set +e 00:23:03.281 05:38:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:03.281 05:38:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:03.281 rmmod nvme_tcp 00:23:03.281 rmmod nvme_fabrics 00:23:03.281 rmmod nvme_keyring 00:23:03.281 05:38:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:03.281 05:38:06 -- nvmf/common.sh@123 -- # set -e 00:23:03.281 05:38:06 -- nvmf/common.sh@124 -- # return 0 00:23:03.281 05:38:06 -- nvmf/common.sh@477 -- # '[' -n 1881110 ']' 00:23:03.281 05:38:06 -- nvmf/common.sh@478 -- # killprocess 1881110 00:23:03.281 05:38:06 -- common/autotest_common.sh@936 -- # '[' -z 1881110 ']' 00:23:03.281 05:38:06 -- common/autotest_common.sh@940 -- # kill -0 1881110 00:23:03.281 05:38:06 -- common/autotest_common.sh@941 -- # uname 00:23:03.281 05:38:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.281 05:38:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1881110 00:23:03.543 05:38:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.543 05:38:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.543 05:38:06 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1881110' 00:23:03.543 killing process with pid 1881110 00:23:03.543 05:38:06 -- common/autotest_common.sh@955 -- # kill 1881110 00:23:03.543 05:38:06 -- common/autotest_common.sh@960 -- # wait 1881110 00:23:03.804 05:38:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:03.804 05:38:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:03.804 05:38:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:03.804 05:38:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.804 05:38:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:03.804 05:38:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.804 05:38:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.804 05:38:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.721 05:38:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:05.721 00:23:05.721 real 1m17.645s 00:23:05.721 user 4m53.941s 00:23:05.721 sys 0m23.158s 00:23:05.721 05:38:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:05.721 05:38:08 -- common/autotest_common.sh@10 -- # set +x 00:23:05.721 ************************************ 00:23:05.721 END TEST nvmf_multiconnection 00:23:05.721 ************************************ 00:23:05.983 05:38:08 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:05.983 05:38:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:05.983 05:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.983 05:38:08 -- common/autotest_common.sh@10 -- # set +x 00:23:05.983 ************************************ 00:23:05.983 START TEST nvmf_initiator_timeout 00:23:05.983 ************************************ 00:23:05.983 05:38:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:05.983 * Looking for test storage... 00:23:05.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.983 05:38:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:05.983 05:38:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:05.983 05:38:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:05.983 05:38:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:05.983 05:38:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:05.983 05:38:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:05.983 05:38:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:05.983 05:38:09 -- scripts/common.sh@335 -- # IFS=.-: 00:23:05.983 05:38:09 -- scripts/common.sh@335 -- # read -ra ver1 00:23:05.983 05:38:09 -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.983 05:38:09 -- scripts/common.sh@336 -- # read -ra ver2 00:23:05.983 05:38:09 -- scripts/common.sh@337 -- # local 'op=<' 00:23:05.983 05:38:09 -- scripts/common.sh@339 -- # ver1_l=2 00:23:05.983 05:38:09 -- scripts/common.sh@340 -- # ver2_l=1 00:23:05.983 05:38:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:05.983 05:38:09 -- scripts/common.sh@343 -- # case "$op" in 00:23:05.983 05:38:09 -- scripts/common.sh@344 -- # : 1 00:23:05.983 05:38:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:05.983 05:38:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.983 05:38:09 -- scripts/common.sh@364 -- # decimal 1 00:23:05.983 05:38:09 -- scripts/common.sh@352 -- # local d=1 00:23:05.983 05:38:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.983 05:38:09 -- scripts/common.sh@354 -- # echo 1 00:23:05.983 05:38:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:05.983 05:38:09 -- scripts/common.sh@365 -- # decimal 2 00:23:05.984 05:38:09 -- scripts/common.sh@352 -- # local d=2 00:23:05.984 05:38:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.984 05:38:09 -- scripts/common.sh@354 -- # echo 2 00:23:05.984 05:38:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:05.984 05:38:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:05.984 05:38:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:05.984 05:38:09 -- scripts/common.sh@367 -- # return 0 00:23:05.984 05:38:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.984 05:38:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:05.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.984 --rc genhtml_branch_coverage=1 00:23:05.984 --rc genhtml_function_coverage=1 00:23:05.984 --rc genhtml_legend=1 00:23:05.984 --rc geninfo_all_blocks=1 00:23:05.984 --rc geninfo_unexecuted_blocks=1 00:23:05.984 00:23:05.984 ' 00:23:05.984 05:38:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:05.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.984 --rc genhtml_branch_coverage=1 00:23:05.984 --rc genhtml_function_coverage=1 00:23:05.984 --rc genhtml_legend=1 00:23:05.984 --rc geninfo_all_blocks=1 00:23:05.984 --rc geninfo_unexecuted_blocks=1 00:23:05.984 00:23:05.984 ' 00:23:05.984 05:38:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:05.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.984 --rc genhtml_branch_coverage=1 00:23:05.984 --rc genhtml_function_coverage=1 00:23:05.984 --rc genhtml_legend=1 00:23:05.984 --rc geninfo_all_blocks=1 00:23:05.984 --rc geninfo_unexecuted_blocks=1 00:23:05.984 00:23:05.984 ' 00:23:05.984 05:38:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:05.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.984 --rc genhtml_branch_coverage=1 00:23:05.984 --rc genhtml_function_coverage=1 00:23:05.984 --rc genhtml_legend=1 00:23:05.984 --rc geninfo_all_blocks=1 00:23:05.984 --rc geninfo_unexecuted_blocks=1 00:23:05.984 00:23:05.984 ' 00:23:05.984 05:38:09 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.984 05:38:09 -- nvmf/common.sh@7 -- # uname -s 00:23:05.984 05:38:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.984 05:38:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.984 05:38:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.984 05:38:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.984 05:38:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.984 05:38:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.984 05:38:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.984 05:38:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.984 05:38:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.984 05:38:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.984 05:38:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.984 05:38:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.984 05:38:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.984 05:38:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.984 05:38:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.984 05:38:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.984 05:38:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.984 05:38:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.984 05:38:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.984 05:38:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.984 05:38:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.984 05:38:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.984 05:38:09 -- paths/export.sh@5 -- # export PATH 00:23:05.984 05:38:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.984 05:38:09 -- nvmf/common.sh@46 -- # : 0 00:23:05.984 05:38:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:05.984 05:38:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:05.984 05:38:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:05.984 05:38:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.984 05:38:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.984 05:38:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:05.984 05:38:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:05.984 05:38:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:05.984 05:38:09 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.984 05:38:09 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.984 05:38:09 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:05.984 05:38:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:05.984 05:38:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.984 05:38:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:05.984 05:38:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:05.984 05:38:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:05.984 05:38:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.984 05:38:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.984 05:38:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.984 05:38:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:05.984 05:38:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:05.984 05:38:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:05.984 05:38:09 -- common/autotest_common.sh@10 -- # set +x 00:23:14.122 05:38:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:14.123 05:38:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:14.123 05:38:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:14.123 05:38:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:14.123 05:38:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:14.123 05:38:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:14.123 05:38:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:14.123 05:38:16 -- nvmf/common.sh@294 -- # net_devs=() 00:23:14.123 05:38:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:14.123 05:38:16 -- nvmf/common.sh@295 -- # e810=() 00:23:14.123 05:38:16 -- nvmf/common.sh@295 -- # local -ga e810 00:23:14.123 05:38:16 -- nvmf/common.sh@296 -- # x722=() 00:23:14.123 05:38:16 -- nvmf/common.sh@296 -- # local -ga x722 00:23:14.123 05:38:16 -- nvmf/common.sh@297 -- # mlx=() 00:23:14.123 05:38:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:14.123 05:38:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.123 05:38:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@326 -- # [[ e810 
== mlx5 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:14.123 05:38:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:14.123 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:14.123 05:38:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:14.123 05:38:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:14.123 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:14.123 05:38:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:14.123 05:38:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.123 05:38:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.123 05:38:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:14.123 Found net devices under 0000:31:00.0: cvl_0_0 00:23:14.123 05:38:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:14.123 05:38:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.123 05:38:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.123 05:38:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:14.123 Found net devices under 0000:31:00.1: cvl_0_1 00:23:14.123 05:38:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:14.123 05:38:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:14.123 05:38:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.123 05:38:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.123 05:38:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:14.123 05:38:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.123 05:38:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.123 05:38:16 -- nvmf/common.sh@239 -- # 
NVMF_SECOND_TARGET_IP= 00:23:14.123 05:38:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.123 05:38:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.123 05:38:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:14.123 05:38:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:14.123 05:38:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.123 05:38:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.123 05:38:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.123 05:38:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.123 05:38:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:14.123 05:38:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.123 05:38:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.123 05:38:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.123 05:38:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:14.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:23:14.123 00:23:14.123 --- 10.0.0.2 ping statistics --- 00:23:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.123 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:23:14.123 05:38:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:23:14.123 00:23:14.123 --- 10.0.0.1 ping statistics --- 00:23:14.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.123 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:23:14.123 05:38:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.123 05:38:16 -- nvmf/common.sh@410 -- # return 0 00:23:14.123 05:38:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:14.123 05:38:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.123 05:38:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:14.123 05:38:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.123 05:38:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:14.123 05:38:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:14.123 05:38:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:14.123 05:38:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:14.123 05:38:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.123 05:38:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.123 05:38:16 -- nvmf/common.sh@469 -- # nvmfpid=1898962 00:23:14.123 05:38:16 -- nvmf/common.sh@470 -- # waitforlisten 1898962 00:23:14.123 05:38:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:14.123 05:38:16 -- common/autotest_common.sh@829 -- # '[' -z 1898962 ']' 00:23:14.123 05:38:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.123 05:38:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.123 05:38:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.123 05:38:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.123 05:38:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.123 [2024-12-07 05:38:16.682361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.123 [2024-12-07 05:38:16.682411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.123 [2024-12-07 05:38:16.753978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.123 [2024-12-07 05:38:16.816889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.123 [2024-12-07 05:38:16.817031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.123 [2024-12-07 05:38:16.817041] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.123 [2024-12-07 05:38:16.817050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.123 [2024-12-07 05:38:16.817091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.123 [2024-12-07 05:38:16.817117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.123 [2024-12-07 05:38:16.817245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.123 [2024-12-07 05:38:16.817245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.383 05:38:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.383 05:38:17 -- common/autotest_common.sh@862 -- # return 0 00:23:14.383 05:38:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:14.383 05:38:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 05:38:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 Malloc0 00:23:14.383 05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 Delay0 00:23:14.383 05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 [2024-12-07 05:38:17.540173] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.383 
05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.383 05:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.383 05:38:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.383 [2024-12-07 05:38:17.577174] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.383 05:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.383 05:38:17 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:16.290 05:38:19 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:16.290 05:38:19 -- common/autotest_common.sh@1187 -- # local i=0 00:23:16.290 05:38:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:23:16.290 05:38:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:23:16.290 05:38:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:23:18.197 05:38:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:23:18.197 05:38:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:23:18.197 05:38:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:23:18.197 05:38:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:23:18.197 05:38:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:23:18.197 05:38:21 -- common/autotest_common.sh@1197 -- # return 0 00:23:18.197 05:38:21 -- target/initiator_timeout.sh@35 -- # fio_pid=1899764 00:23:18.197 05:38:21 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:18.197 05:38:21 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:18.197 [global] 00:23:18.197 thread=1 00:23:18.197 invalidate=1 00:23:18.197 rw=write 00:23:18.197 time_based=1 00:23:18.197 runtime=60 00:23:18.197 ioengine=libaio 00:23:18.197 direct=1 00:23:18.197 bs=4096 00:23:18.197 iodepth=1 00:23:18.197 norandommap=0 00:23:18.197 numjobs=1 00:23:18.197 00:23:18.197 verify_dump=1 00:23:18.197 verify_backlog=512 00:23:18.197 verify_state_save=0 00:23:18.197 do_verify=1 00:23:18.197 verify=crc32c-intel 00:23:18.197 [job0] 00:23:18.197 filename=/dev/nvme0n1 00:23:18.197 Could not set queue depth (nvme0n1) 00:23:18.458 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:18.458 fio-3.35 00:23:18.458 Starting 1 thread 00:23:21.004 05:38:24 -- target/initiator_timeout.sh@40 -- # rpc_cmd 
bdev_delay_update_latency Delay0 avg_read 31000000 00:23:21.004 05:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.004 05:38:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.004 true 00:23:21.004 05:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.004 05:38:24 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:21.004 05:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.004 05:38:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.004 true 00:23:21.004 05:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.004 05:38:24 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:21.004 05:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.004 05:38:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.004 true 00:23:21.004 05:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.004 05:38:24 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:21.004 05:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.004 05:38:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.004 true 00:23:21.004 05:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.004 05:38:24 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:24.301 05:38:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.301 05:38:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.301 true 00:23:24.301 05:38:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:24.301 05:38:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.301 05:38:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.301 true 00:23:24.301 05:38:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:24.301 05:38:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.301 05:38:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.301 true 00:23:24.301 05:38:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:24.301 05:38:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.301 05:38:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.301 true 00:23:24.301 05:38:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:24.301 05:38:27 -- target/initiator_timeout.sh@54 -- # wait 1899764 00:24:20.568 00:24:20.568 job0: (groupid=0, jobs=1): err= 0: pid=1900075: Sat Dec 7 05:39:21 2024 00:24:20.568 read: IOPS=117, BW=472KiB/s (483kB/s)(27.7MiB/60004msec) 00:24:20.568 slat (usec): min=6, max=10202, avg=27.25, stdev=126.05 00:24:20.568 clat (usec): min=325, max=42078k, avg=7816.90, stdev=500100.83 00:24:20.568 lat (usec): min=333, max=42078k, avg=7844.15, stdev=500100.94 00:24:20.568 clat percentiles (usec): 00:24:20.568 | 1.00th=[ 515], 5.00th=[ 611], 10.00th=[ 660], 00:24:20.568 | 20.00th=[ 717], 30.00th=[ 758], 40.00th=[ 807], 00:24:20.568 | 50.00th=[ 840], 60.00th=[ 873], 70.00th=[ 
906], 00:24:20.568 | 80.00th=[ 988], 90.00th=[ 1057], 95.00th=[ 1090], 00:24:20.568 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42730], 00:24:20.568 | 99.95th=[ 43254], 99.99th=[17112761] 00:24:20.568 write: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60004msec); 0 zone resets 00:24:20.568 slat (nsec): min=8626, max=82304, avg=29881.53, stdev=10476.82 00:24:20.568 clat (usec): min=167, max=1192, avg=577.72, stdev=123.85 00:24:20.568 lat (usec): min=177, max=1204, avg=607.60, stdev=128.43 00:24:20.568 clat percentiles (usec): 00:24:20.568 | 1.00th=[ 235], 5.00th=[ 355], 10.00th=[ 424], 20.00th=[ 478], 00:24:20.568 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:24:20.568 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 783], 00:24:20.568 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 971], 00:24:20.568 | 99.99th=[ 1188] 00:24:20.568 bw ( KiB/s): min= 1000, max= 4096, per=100.00%, avg=2730.67, stdev=1103.42, samples=21 00:24:20.568 iops : min= 250, max= 1024, avg=682.67, stdev=275.86, samples=21 00:24:20.568 lat (usec) : 250=0.69%, 500=11.70%, 750=48.78%, 1000=29.59% 00:24:20.568 lat (msec) : 2=7.97%, 50=1.26%, >=2000=0.01% 00:24:20.568 cpu : usr=0.36%, sys=0.68%, ctx=14254, majf=0, minf=1 00:24:20.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.568 issued rwts: total=7080,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:20.568 00:24:20.568 Run status group 0 (all jobs): 00:24:20.568 READ: bw=472KiB/s (483kB/s), 472KiB/s-472KiB/s (483kB/s-483kB/s), io=27.7MiB (29.0MB), run=60004-60004msec 00:24:20.568 WRITE: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60004-60004msec 00:24:20.568 00:24:20.568 Disk stats (read/write): 00:24:20.568 nvme0n1: ios=7169/7168, merge=0/0, ticks=13009/3965, in_queue=16974, util=99.58% 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:20.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:20.568 05:39:21 -- common/autotest_common.sh@1208 -- # local i=0 00:24:20.568 05:39:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:24:20.568 05:39:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:20.568 05:39:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:20.568 05:39:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:24:20.568 05:39:21 -- common/autotest_common.sh@1220 -- # return 0 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:20.568 nvmf hotplug test: fio successful as expected 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.568 05:39:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.568 05:39:21 -- common/autotest_common.sh@10 -- # set +x 00:24:20.568 05:39:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.568 05:39:21 -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:24:20.569 05:39:21 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:20.569 05:39:21 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:20.569 05:39:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:20.569 05:39:21 -- nvmf/common.sh@116 -- # sync 00:24:20.569 05:39:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:20.569 05:39:21 -- nvmf/common.sh@119 -- # set +e 00:24:20.569 05:39:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:20.569 05:39:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:20.569 rmmod nvme_tcp 00:24:20.569 rmmod nvme_fabrics 00:24:20.569 rmmod nvme_keyring 00:24:20.569 05:39:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:20.569 05:39:21 -- nvmf/common.sh@123 -- # set -e 00:24:20.569 05:39:21 -- nvmf/common.sh@124 -- # return 0 00:24:20.569 05:39:21 -- nvmf/common.sh@477 -- # '[' -n 1898962 ']' 00:24:20.569 05:39:21 -- nvmf/common.sh@478 -- # killprocess 1898962 00:24:20.569 05:39:21 -- common/autotest_common.sh@936 -- # '[' -z 1898962 ']' 00:24:20.569 05:39:21 -- common/autotest_common.sh@940 -- # kill -0 1898962 00:24:20.569 05:39:21 -- common/autotest_common.sh@941 -- # uname 00:24:20.569 05:39:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:20.569 05:39:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1898962 00:24:20.569 05:39:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:20.569 05:39:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:20.569 05:39:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1898962' 00:24:20.569 killing process with pid 1898962 00:24:20.569 05:39:21 -- common/autotest_common.sh@955 -- # kill 1898962 00:24:20.569 05:39:21 -- common/autotest_common.sh@960 -- # wait 1898962 00:24:20.569 05:39:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:20.569 05:39:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:20.569 05:39:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:20.569 05:39:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.569 05:39:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:20.569 05:39:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.569 05:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.569 05:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.142 05:39:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:21.142 00:24:21.142 real 1m15.130s 00:24:21.142 user 4m35.810s 00:24:21.142 sys 0m8.123s 00:24:21.142 05:39:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:21.142 05:39:24 -- common/autotest_common.sh@10 -- # set +x 00:24:21.142 ************************************ 00:24:21.142 END TEST nvmf_initiator_timeout 00:24:21.142 ************************************ 00:24:21.142 05:39:24 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:21.142 05:39:24 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:21.142 05:39:24 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:21.142 05:39:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:21.142 05:39:24 -- common/autotest_common.sh@10 -- # set +x 00:24:27.730 05:39:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.730 05:39:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:27.730 05:39:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:27.730 05:39:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:27.730 
05:39:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:27.730 05:39:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:27.730 05:39:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:27.730 05:39:30 -- nvmf/common.sh@294 -- # net_devs=() 00:24:27.730 05:39:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:27.730 05:39:30 -- nvmf/common.sh@295 -- # e810=() 00:24:27.730 05:39:30 -- nvmf/common.sh@295 -- # local -ga e810 00:24:27.730 05:39:30 -- nvmf/common.sh@296 -- # x722=() 00:24:27.730 05:39:30 -- nvmf/common.sh@296 -- # local -ga x722 00:24:27.730 05:39:30 -- nvmf/common.sh@297 -- # mlx=() 00:24:27.730 05:39:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:27.730 05:39:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.730 05:39:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:27.730 05:39:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:27.730 05:39:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:27.730 05:39:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.730 05:39:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:27.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:27.730 05:39:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.730 05:39:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:27.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:27.730 05:39:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:27.730 05:39:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:27.730 05:39:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:24:27.730 05:39:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.730 05:39:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.730 05:39:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.730 05:39:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:27.730 Found net devices under 0000:31:00.0: cvl_0_0 00:24:27.730 05:39:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.730 05:39:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.730 05:39:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.730 05:39:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.730 05:39:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.730 05:39:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:27.730 Found net devices under 0000:31:00.1: cvl_0_1 00:24:27.730 05:39:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.730 05:39:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:27.730 05:39:30 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.730 05:39:30 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:27.730 05:39:30 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:27.730 05:39:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:27.730 05:39:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:27.730 05:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:27.730 ************************************ 00:24:27.730 START TEST nvmf_perf_adq 00:24:27.730 ************************************ 00:24:27.730 05:39:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:27.730 * Looking for test storage... 00:24:27.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:27.730 05:39:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:27.730 05:39:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:27.730 05:39:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:27.730 05:39:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:27.730 05:39:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:27.730 05:39:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:27.730 05:39:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:27.730 05:39:30 -- scripts/common.sh@335 -- # IFS=.-: 00:24:27.730 05:39:30 -- scripts/common.sh@335 -- # read -ra ver1 00:24:27.730 05:39:30 -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.730 05:39:30 -- scripts/common.sh@336 -- # read -ra ver2 00:24:27.730 05:39:30 -- scripts/common.sh@337 -- # local 'op=<' 00:24:27.730 05:39:30 -- scripts/common.sh@339 -- # ver1_l=2 00:24:27.730 05:39:30 -- scripts/common.sh@340 -- # ver2_l=1 00:24:27.731 05:39:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:27.731 05:39:30 -- scripts/common.sh@343 -- # case "$op" in 00:24:27.731 05:39:30 -- scripts/common.sh@344 -- # : 1 00:24:27.731 05:39:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:27.731 05:39:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.991 05:39:30 -- scripts/common.sh@364 -- # decimal 1 00:24:27.992 05:39:30 -- scripts/common.sh@352 -- # local d=1 00:24:27.992 05:39:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.992 05:39:30 -- scripts/common.sh@354 -- # echo 1 00:24:27.992 05:39:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:27.992 05:39:30 -- scripts/common.sh@365 -- # decimal 2 00:24:27.992 05:39:30 -- scripts/common.sh@352 -- # local d=2 00:24:27.992 05:39:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.992 05:39:30 -- scripts/common.sh@354 -- # echo 2 00:24:27.992 05:39:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:27.992 05:39:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:27.992 05:39:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:27.992 05:39:30 -- scripts/common.sh@367 -- # return 0 00:24:27.992 05:39:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.992 05:39:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.992 --rc genhtml_branch_coverage=1 00:24:27.992 --rc genhtml_function_coverage=1 00:24:27.992 --rc genhtml_legend=1 00:24:27.992 --rc geninfo_all_blocks=1 00:24:27.992 --rc geninfo_unexecuted_blocks=1 00:24:27.992 00:24:27.992 ' 00:24:27.992 05:39:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.992 --rc genhtml_branch_coverage=1 00:24:27.992 --rc genhtml_function_coverage=1 00:24:27.992 --rc genhtml_legend=1 00:24:27.992 --rc geninfo_all_blocks=1 00:24:27.992 --rc geninfo_unexecuted_blocks=1 00:24:27.992 00:24:27.992 ' 00:24:27.992 05:39:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.992 --rc genhtml_branch_coverage=1 00:24:27.992 --rc genhtml_function_coverage=1 00:24:27.992 --rc genhtml_legend=1 00:24:27.992 --rc geninfo_all_blocks=1 00:24:27.992 --rc geninfo_unexecuted_blocks=1 00:24:27.992 00:24:27.992 ' 00:24:27.992 05:39:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.992 --rc genhtml_branch_coverage=1 00:24:27.992 --rc genhtml_function_coverage=1 00:24:27.992 --rc genhtml_legend=1 00:24:27.992 --rc geninfo_all_blocks=1 00:24:27.992 --rc geninfo_unexecuted_blocks=1 00:24:27.992 00:24:27.992 ' 00:24:27.992 05:39:30 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.992 05:39:30 -- nvmf/common.sh@7 -- # uname -s 00:24:27.992 05:39:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.992 05:39:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.992 05:39:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.992 05:39:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.992 05:39:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.992 05:39:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.992 05:39:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.992 05:39:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.992 05:39:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.992 05:39:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.992 05:39:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:27.992 05:39:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:27.992 05:39:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.992 05:39:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.992 05:39:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.992 05:39:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.992 05:39:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.992 05:39:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.992 05:39:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.992 05:39:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.992 05:39:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.992 05:39:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.992 05:39:31 -- paths/export.sh@5 -- # export PATH 00:24:27.992 05:39:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.992 05:39:31 -- nvmf/common.sh@46 -- # : 0 00:24:27.992 05:39:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:27.992 05:39:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:27.992 05:39:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:27.992 05:39:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.992 05:39:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.992 05:39:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:27.992 05:39:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:27.992 05:39:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:27.992 05:39:31 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:27.992 05:39:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:27.992 05:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:36.135 05:39:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:36.135 05:39:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:36.135 05:39:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:36.135 05:39:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:36.135 05:39:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:36.135 05:39:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:36.135 05:39:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:36.135 05:39:38 -- nvmf/common.sh@294 -- # net_devs=() 00:24:36.135 05:39:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:36.135 05:39:38 -- nvmf/common.sh@295 -- # e810=() 00:24:36.135 05:39:38 -- nvmf/common.sh@295 -- # local -ga e810 00:24:36.135 05:39:38 -- nvmf/common.sh@296 -- # x722=() 00:24:36.135 05:39:38 -- nvmf/common.sh@296 -- # local -ga x722 00:24:36.135 05:39:38 -- nvmf/common.sh@297 -- # mlx=() 00:24:36.135 05:39:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:36.135 05:39:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.135 05:39:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:36.135 05:39:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:36.135 05:39:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:36.135 05:39:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.135 05:39:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:36.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:36.135 05:39:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.135 05:39:38 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:36.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:36.135 05:39:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:36.135 05:39:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:36.135 05:39:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.135 05:39:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.135 05:39:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.135 05:39:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.135 05:39:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:36.135 Found net devices under 0000:31:00.0: cvl_0_0 00:24:36.135 05:39:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.135 05:39:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.135 05:39:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.135 05:39:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.135 05:39:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.135 05:39:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:36.135 Found net devices under 0000:31:00.1: cvl_0_1 00:24:36.135 05:39:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.135 05:39:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:36.135 05:39:38 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.135 05:39:38 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:36.135 05:39:38 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:36.135 05:39:38 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:36.135 05:39:38 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:36.396 05:39:39 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:38.942 05:39:41 -- target/perf_adq.sh@54 -- # sleep 5 00:24:44.232 05:39:46 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:44.232 05:39:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:44.232 05:39:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.232 05:39:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:44.232 05:39:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:44.232 05:39:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:44.232 05:39:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.232 05:39:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.232 05:39:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.232 05:39:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:44.232 05:39:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:44.232 05:39:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:44.232 05:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:44.232 05:39:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:44.232 05:39:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:44.232 05:39:46 -- 
nvmf/common.sh@290 -- # local -a pci_devs 00:24:44.232 05:39:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:44.232 05:39:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:44.232 05:39:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:44.232 05:39:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:44.232 05:39:46 -- nvmf/common.sh@294 -- # net_devs=() 00:24:44.232 05:39:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:44.232 05:39:46 -- nvmf/common.sh@295 -- # e810=() 00:24:44.232 05:39:46 -- nvmf/common.sh@295 -- # local -ga e810 00:24:44.232 05:39:46 -- nvmf/common.sh@296 -- # x722=() 00:24:44.232 05:39:46 -- nvmf/common.sh@296 -- # local -ga x722 00:24:44.232 05:39:46 -- nvmf/common.sh@297 -- # mlx=() 00:24:44.232 05:39:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:44.232 05:39:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.232 05:39:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:44.232 05:39:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:44.232 05:39:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:44.232 05:39:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:44.232 05:39:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:44.232 05:39:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:44.232 05:39:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.232 05:39:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:44.232 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:44.232 05:39:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.232 05:39:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.233 05:39:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:44.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:44.233 05:39:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:44.233 05:39:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:44.233 05:39:46 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.233 05:39:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.233 05:39:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.233 05:39:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.233 05:39:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:44.233 Found net devices under 0000:31:00.0: cvl_0_0 00:24:44.233 05:39:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.233 05:39:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.233 05:39:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.233 05:39:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.233 05:39:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.233 05:39:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:44.233 Found net devices under 0000:31:00.1: cvl_0_1 00:24:44.233 05:39:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.233 05:39:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:44.233 05:39:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:44.233 05:39:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:44.233 05:39:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:44.233 05:39:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.233 05:39:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.233 05:39:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.233 05:39:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:44.233 05:39:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.233 05:39:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.233 05:39:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:44.233 05:39:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.233 05:39:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.233 05:39:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:44.233 05:39:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:44.233 05:39:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.233 05:39:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.233 05:39:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.233 05:39:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.233 05:39:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:44.233 05:39:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.233 05:39:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.233 05:39:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.233 05:39:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:44.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:44.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:44.233 00:24:44.233 --- 10.0.0.2 ping statistics --- 00:24:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.233 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:44.233 05:39:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:24:44.233 00:24:44.233 --- 10.0.0.1 ping statistics --- 00:24:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.233 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:24:44.233 05:39:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.233 05:39:47 -- nvmf/common.sh@410 -- # return 0 00:24:44.233 05:39:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.233 05:39:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.233 05:39:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:44.233 05:39:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:44.233 05:39:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.233 05:39:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:44.233 05:39:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:44.233 05:39:47 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:44.233 05:39:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:44.233 05:39:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.233 05:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:44.233 05:39:47 -- nvmf/common.sh@469 -- # nvmfpid=1922047 00:24:44.233 05:39:47 -- nvmf/common.sh@470 -- # waitforlisten 1922047 00:24:44.233 05:39:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:44.233 05:39:47 -- common/autotest_common.sh@829 -- # '[' -z 1922047 ']' 00:24:44.233 05:39:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.233 05:39:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.233 05:39:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.233 05:39:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.233 05:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:44.233 [2024-12-07 05:39:47.159485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:44.233 [2024-12-07 05:39:47.159553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.233 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.233 [2024-12-07 05:39:47.234196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.233 [2024-12-07 05:39:47.307447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.233 [2024-12-07 05:39:47.307579] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
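The nvmftestinit/nvmf_tcp_init steps above split the dual-port E810 NIC across two network namespaces so a single host can act as both NVMe/TCP target and initiator: one port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits port 4420, and two pings confirm the path over the (presumably back-to-back cabled) link. A minimal stand-alone sketch of that wiring, using the interface names and addresses from this run (adjust for other hardware):

  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one physical port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP into the root namespace
  ping -c 1 10.0.0.2                                # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator port

The two ping checks mirror the ones logged above; only once both succeed is nvmf_tgt launched inside the namespace with ip netns exec.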
00:24:44.233 [2024-12-07 05:39:47.307590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.233 [2024-12-07 05:39:47.307599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.233 [2024-12-07 05:39:47.307769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.233 [2024-12-07 05:39:47.307889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.233 [2024-12-07 05:39:47.308065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.233 [2024-12-07 05:39:47.308071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.803 05:39:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.803 05:39:47 -- common/autotest_common.sh@862 -- # return 0 00:24:44.803 05:39:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.803 05:39:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.803 05:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:44.803 05:39:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.803 05:39:47 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:44.803 05:39:47 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:44.803 05:39:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.803 05:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:44.803 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.803 05:39:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:44.803 05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.803 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:45.064 05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.064 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 [2024-12-07 05:39:48.096968] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:45.064 05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.064 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 Malloc1 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.064 05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.064 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:45.064 05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.064 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.064 
05:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.064 05:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:45.064 [2024-12-07 05:39:48.156372] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.064 05:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.064 05:39:48 -- target/perf_adq.sh@73 -- # perfpid=1922195 00:24:45.064 05:39:48 -- target/perf_adq.sh@74 -- # sleep 2 00:24:45.064 05:39:48 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:45.064 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.983 05:39:50 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:46.983 05:39:50 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:46.983 05:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.983 05:39:50 -- target/perf_adq.sh@76 -- # wc -l 00:24:46.983 05:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:46.983 05:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.983 05:39:50 -- target/perf_adq.sh@76 -- # count=4 00:24:46.983 05:39:50 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:46.983 05:39:50 -- target/perf_adq.sh@81 -- # wait 1922195 00:24:55.197 [2024-12-07 05:39:58.314832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4fe0 is same with the state(5) to be set 00:24:55.197 Initializing NVMe Controllers 00:24:55.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:55.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:55.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:55.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:55.197 Initialization complete. Launching workers. 
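Before the numbers below, the target is configured entirely over JSON-RPC: posix socket options (placement id 0, zero-copy send), the TCP transport with --sock-priority 0, a 64 MB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420; the initiator side is then driven by spdk_nvme_perf on cores 0xF0. A rough stand-alone equivalent of those rpc_cmd calls, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket rather than the harness wrapper used here:

  # Socket and transport setup for the baseline (non-busy-poll) run
  scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  # Back the subsystem with a 64 MB, 512-byte-block malloc bdev and export it on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator-side load generator (same arguments as the logged run)
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The count=4 check just above is nvmf_get_stats piped through jq ('.poll_groups[] | select(.current_io_qpairs == 1) | length') and wc -l: it verifies that the four initiator cores landed on four distinct target poll groups before the run is allowed to complete.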
00:24:55.197 ======================================================== 00:24:55.198 Latency(us) 00:24:55.198 Device Information : IOPS MiB/s Average min max 00:24:55.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11690.00 45.66 5475.63 1129.83 8647.63 00:24:55.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15126.70 59.09 4237.73 987.52 45070.74 00:24:55.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13822.30 53.99 4630.58 783.54 45546.91 00:24:55.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14411.40 56.29 4441.53 1027.97 11169.95 00:24:55.198 ======================================================== 00:24:55.198 Total : 55050.40 215.04 4652.59 783.54 45546.91 00:24:55.198 00:24:55.198 05:39:58 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:55.198 05:39:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:55.198 05:39:58 -- nvmf/common.sh@116 -- # sync 00:24:55.198 05:39:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:55.198 05:39:58 -- nvmf/common.sh@119 -- # set +e 00:24:55.198 05:39:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:55.198 05:39:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:55.198 rmmod nvme_tcp 00:24:55.198 rmmod nvme_fabrics 00:24:55.198 rmmod nvme_keyring 00:24:55.198 05:39:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:55.198 05:39:58 -- nvmf/common.sh@123 -- # set -e 00:24:55.198 05:39:58 -- nvmf/common.sh@124 -- # return 0 00:24:55.198 05:39:58 -- nvmf/common.sh@477 -- # '[' -n 1922047 ']' 00:24:55.198 05:39:58 -- nvmf/common.sh@478 -- # killprocess 1922047 00:24:55.198 05:39:58 -- common/autotest_common.sh@936 -- # '[' -z 1922047 ']' 00:24:55.198 05:39:58 -- common/autotest_common.sh@940 -- # kill -0 1922047 00:24:55.198 05:39:58 -- common/autotest_common.sh@941 -- # uname 00:24:55.198 05:39:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.198 05:39:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1922047 00:24:55.458 05:39:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.458 05:39:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.458 05:39:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1922047' 00:24:55.458 killing process with pid 1922047 00:24:55.458 05:39:58 -- common/autotest_common.sh@955 -- # kill 1922047 00:24:55.458 05:39:58 -- common/autotest_common.sh@960 -- # wait 1922047 00:24:55.458 05:39:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:55.458 05:39:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:55.458 05:39:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:55.458 05:39:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.458 05:39:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:55.458 05:39:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.458 05:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.458 05:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.000 05:40:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:58.000 05:40:00 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:58.000 05:40:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:58.939 05:40:02 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:00.849 05:40:04 -- target/perf_adq.sh@54 -- # sleep 5 00:25:06.159 05:40:09 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:06.159 
05:40:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:06.159 05:40:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.159 05:40:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:06.159 05:40:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:06.159 05:40:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:06.159 05:40:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.159 05:40:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.159 05:40:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.159 05:40:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:06.159 05:40:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:06.159 05:40:09 -- common/autotest_common.sh@10 -- # set +x 00:25:06.159 05:40:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:06.159 05:40:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:06.159 05:40:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:06.159 05:40:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:06.159 05:40:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:06.159 05:40:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:06.159 05:40:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:06.159 05:40:09 -- nvmf/common.sh@294 -- # net_devs=() 00:25:06.159 05:40:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:06.159 05:40:09 -- nvmf/common.sh@295 -- # e810=() 00:25:06.159 05:40:09 -- nvmf/common.sh@295 -- # local -ga e810 00:25:06.159 05:40:09 -- nvmf/common.sh@296 -- # x722=() 00:25:06.159 05:40:09 -- nvmf/common.sh@296 -- # local -ga x722 00:25:06.159 05:40:09 -- nvmf/common.sh@297 -- # mlx=() 00:25:06.159 05:40:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:06.159 05:40:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.159 05:40:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:06.159 05:40:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:06.159 05:40:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.159 05:40:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:06.159 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:06.159 05:40:09 -- nvmf/common.sh@341 -- # [[ ice == 
unknown ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.159 05:40:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:06.159 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:06.159 05:40:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.159 05:40:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.159 05:40:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.159 05:40:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:06.159 Found net devices under 0000:31:00.0: cvl_0_0 00:25:06.159 05:40:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.159 05:40:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.159 05:40:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.159 05:40:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.159 05:40:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:06.159 Found net devices under 0000:31:00.1: cvl_0_1 00:25:06.159 05:40:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.159 05:40:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:06.159 05:40:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:06.159 05:40:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:06.159 05:40:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.159 05:40:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.159 05:40:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.159 05:40:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:06.159 05:40:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.159 05:40:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.159 05:40:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:06.159 05:40:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.159 05:40:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.159 05:40:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:06.160 05:40:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:06.160 05:40:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.160 05:40:09 -- nvmf/common.sh@250 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.160 05:40:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.160 05:40:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.160 05:40:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:06.160 05:40:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.160 05:40:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.160 05:40:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.160 05:40:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:06.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.800 ms 00:25:06.420 00:25:06.420 --- 10.0.0.2 ping statistics --- 00:25:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.420 rtt min/avg/max/mdev = 0.800/0.800/0.800/0.000 ms 00:25:06.420 05:40:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:25:06.420 00:25:06.420 --- 10.0.0.1 ping statistics --- 00:25:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.420 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:25:06.420 05:40:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.420 05:40:09 -- nvmf/common.sh@410 -- # return 0 00:25:06.420 05:40:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:06.420 05:40:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.420 05:40:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:06.420 05:40:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:06.420 05:40:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.420 05:40:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:06.420 05:40:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:06.420 05:40:09 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:06.420 05:40:09 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:06.420 05:40:09 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:06.420 05:40:09 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:06.420 net.core.busy_poll = 1 00:25:06.420 05:40:09 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:06.420 net.core.busy_read = 1 00:25:06.420 05:40:09 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:06.420 05:40:09 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:06.680 05:40:09 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:06.680 05:40:09 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:06.680 05:40:09 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:06.680 05:40:09 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:06.680 05:40:09 -- 
nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:06.680 05:40:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.680 05:40:09 -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 05:40:09 -- nvmf/common.sh@469 -- # nvmfpid=1926867 00:25:06.680 05:40:09 -- nvmf/common.sh@470 -- # waitforlisten 1926867 00:25:06.680 05:40:09 -- common/autotest_common.sh@829 -- # '[' -z 1926867 ']' 00:25:06.680 05:40:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:06.680 05:40:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.680 05:40:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.680 05:40:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.680 05:40:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.680 05:40:09 -- common/autotest_common.sh@10 -- # set +x 00:25:06.681 [2024-12-07 05:40:09.810573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:06.681 [2024-12-07 05:40:09.810642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.681 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.681 [2024-12-07 05:40:09.885670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.941 [2024-12-07 05:40:09.958228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.941 [2024-12-07 05:40:09.958370] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.941 [2024-12-07 05:40:09.958381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.941 [2024-12-07 05:40:09.958390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
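Between the two runs the harness reloads the ice driver and, in adq_configure_driver above, programs the NIC and kernel for ADQ: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, busy polling enabled, an mqprio qdisc with two hardware traffic classes, and a flower filter that steers NVMe/TCP traffic (TCP dport 4420 to 10.0.0.2) into TC 1. In this log the interface commands run inside the target namespace via ip netns exec; stripped of that prefix, the sequence looks roughly like:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1        # set in the root namespace in this run
  sysctl -w net.core.busy_read=1
  # Two hardware traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded in channel mode
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic for the listener into TC1 in hardware (skip_sw)
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # align XPS with the ADQ queue set

The second nvmf_tgt started here then uses --enable-placement-id 1 and --sock-priority 1 in the rpc_cmd calls that follow, matching the hardware traffic class configured above.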
00:25:06.941 [2024-12-07 05:40:09.958549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.941 [2024-12-07 05:40:09.958680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.941 [2024-12-07 05:40:09.958840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.941 [2024-12-07 05:40:09.958841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.512 05:40:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.512 05:40:10 -- common/autotest_common.sh@862 -- # return 0 00:25:07.512 05:40:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:07.512 05:40:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.512 05:40:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.512 05:40:10 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:07.512 05:40:10 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:07.512 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.512 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.512 05:40:10 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:07.512 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.512 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.512 05:40:10 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:07.512 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.512 [2024-12-07 05:40:10.720292] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.512 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.512 05:40:10 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:07.512 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.512 Malloc1 00:25:07.512 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.512 05:40:10 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.512 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.512 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.772 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.772 05:40:10 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:07.772 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.772 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.772 05:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.772 05:40:10 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.772 05:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.772 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.772 [2024-12-07 05:40:10.775809] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.772 05:40:10 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.772 05:40:10 -- target/perf_adq.sh@94 -- # perfpid=1926944 00:25:07.772 05:40:10 -- target/perf_adq.sh@95 -- # sleep 2 00:25:07.772 05:40:10 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:07.772 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.686 05:40:12 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:09.686 05:40:12 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:09.686 05:40:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.686 05:40:12 -- target/perf_adq.sh@97 -- # wc -l 00:25:09.686 05:40:12 -- common/autotest_common.sh@10 -- # set +x 00:25:09.686 05:40:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.686 05:40:12 -- target/perf_adq.sh@97 -- # count=2 00:25:09.686 05:40:12 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:09.686 05:40:12 -- target/perf_adq.sh@103 -- # wait 1926944 00:25:17.821 Initializing NVMe Controllers 00:25:17.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:17.821 Initialization complete. Launching workers. 00:25:17.821 ======================================================== 00:25:17.821 Latency(us) 00:25:17.821 Device Information : IOPS MiB/s Average min max 00:25:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8614.40 33.65 7429.45 969.20 52926.66 00:25:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 18363.20 71.73 3484.72 875.46 45404.52 00:25:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7733.50 30.21 8275.46 736.92 51980.26 00:25:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7153.00 27.94 8958.03 988.38 53861.72 00:25:17.821 ======================================================== 00:25:17.821 Total : 41864.10 163.53 6116.60 736.92 53861.72 00:25:17.821 00:25:17.821 05:40:20 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:17.821 05:40:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:17.821 05:40:20 -- nvmf/common.sh@116 -- # sync 00:25:17.821 05:40:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.821 05:40:20 -- nvmf/common.sh@119 -- # set +e 00:25:17.821 05:40:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:17.821 05:40:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.821 rmmod nvme_tcp 00:25:17.821 rmmod nvme_fabrics 00:25:17.821 rmmod nvme_keyring 00:25:17.821 05:40:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.821 05:40:21 -- nvmf/common.sh@123 -- # set -e 00:25:17.821 05:40:21 -- nvmf/common.sh@124 -- # return 0 00:25:17.821 05:40:21 -- nvmf/common.sh@477 -- # '[' -n 1926867 ']' 00:25:17.821 05:40:21 -- nvmf/common.sh@478 -- # killprocess 1926867 00:25:17.821 05:40:21 -- common/autotest_common.sh@936 -- # '[' -z 1926867 ']' 00:25:17.821 05:40:21 -- common/autotest_common.sh@940 -- # 
kill -0 1926867 00:25:17.821 05:40:21 -- common/autotest_common.sh@941 -- # uname 00:25:17.821 05:40:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.821 05:40:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1926867 00:25:18.083 05:40:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.083 05:40:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.083 05:40:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1926867' 00:25:18.083 killing process with pid 1926867 00:25:18.083 05:40:21 -- common/autotest_common.sh@955 -- # kill 1926867 00:25:18.083 05:40:21 -- common/autotest_common.sh@960 -- # wait 1926867 00:25:18.083 05:40:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:18.083 05:40:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:18.083 05:40:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:18.083 05:40:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.083 05:40:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:18.083 05:40:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.083 05:40:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.083 05:40:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.389 05:40:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:21.389 05:40:24 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:21.389 00:25:21.389 real 0m53.541s 00:25:21.389 user 2m48.635s 00:25:21.389 sys 0m11.723s 00:25:21.389 05:40:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:21.389 05:40:24 -- common/autotest_common.sh@10 -- # set +x 00:25:21.389 ************************************ 00:25:21.389 END TEST nvmf_perf_adq 00:25:21.389 ************************************ 00:25:21.389 05:40:24 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:21.389 05:40:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.389 05:40:24 -- common/autotest_common.sh@10 -- # set +x 00:25:21.389 ************************************ 00:25:21.389 START TEST nvmf_shutdown 00:25:21.389 ************************************ 00:25:21.389 05:40:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:21.389 * Looking for test storage... 
00:25:21.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.389 05:40:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:21.389 05:40:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:21.389 05:40:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:21.389 05:40:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:21.389 05:40:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:21.389 05:40:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:21.389 05:40:24 -- scripts/common.sh@335 -- # IFS=.-: 00:25:21.389 05:40:24 -- scripts/common.sh@335 -- # read -ra ver1 00:25:21.389 05:40:24 -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.389 05:40:24 -- scripts/common.sh@336 -- # read -ra ver2 00:25:21.389 05:40:24 -- scripts/common.sh@337 -- # local 'op=<' 00:25:21.389 05:40:24 -- scripts/common.sh@339 -- # ver1_l=2 00:25:21.389 05:40:24 -- scripts/common.sh@340 -- # ver2_l=1 00:25:21.389 05:40:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:21.389 05:40:24 -- scripts/common.sh@343 -- # case "$op" in 00:25:21.389 05:40:24 -- scripts/common.sh@344 -- # : 1 00:25:21.389 05:40:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:21.389 05:40:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.389 05:40:24 -- scripts/common.sh@364 -- # decimal 1 00:25:21.389 05:40:24 -- scripts/common.sh@352 -- # local d=1 00:25:21.389 05:40:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.389 05:40:24 -- scripts/common.sh@354 -- # echo 1 00:25:21.389 05:40:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:21.389 05:40:24 -- scripts/common.sh@365 -- # decimal 2 00:25:21.389 05:40:24 -- scripts/common.sh@352 -- # local d=2 00:25:21.389 05:40:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.389 05:40:24 -- scripts/common.sh@354 -- # echo 2 00:25:21.389 05:40:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:21.389 05:40:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:21.389 05:40:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:21.389 05:40:24 -- scripts/common.sh@367 -- # return 0 00:25:21.389 05:40:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:21.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.389 --rc genhtml_branch_coverage=1 00:25:21.389 --rc genhtml_function_coverage=1 00:25:21.389 --rc genhtml_legend=1 00:25:21.389 --rc geninfo_all_blocks=1 00:25:21.389 --rc geninfo_unexecuted_blocks=1 00:25:21.389 00:25:21.389 ' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:21.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.389 --rc genhtml_branch_coverage=1 00:25:21.389 --rc genhtml_function_coverage=1 00:25:21.389 --rc genhtml_legend=1 00:25:21.389 --rc geninfo_all_blocks=1 00:25:21.389 --rc geninfo_unexecuted_blocks=1 00:25:21.389 00:25:21.389 ' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:21.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.389 --rc genhtml_branch_coverage=1 00:25:21.389 --rc genhtml_function_coverage=1 00:25:21.389 --rc genhtml_legend=1 00:25:21.389 --rc geninfo_all_blocks=1 00:25:21.389 --rc geninfo_unexecuted_blocks=1 00:25:21.389 00:25:21.389 
' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:21.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.389 --rc genhtml_branch_coverage=1 00:25:21.389 --rc genhtml_function_coverage=1 00:25:21.389 --rc genhtml_legend=1 00:25:21.389 --rc geninfo_all_blocks=1 00:25:21.389 --rc geninfo_unexecuted_blocks=1 00:25:21.389 00:25:21.389 ' 00:25:21.389 05:40:24 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.389 05:40:24 -- nvmf/common.sh@7 -- # uname -s 00:25:21.389 05:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.389 05:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.389 05:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.389 05:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.389 05:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.389 05:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.389 05:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.389 05:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.389 05:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.389 05:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.389 05:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.389 05:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.389 05:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.389 05:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.389 05:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.389 05:40:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.389 05:40:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.389 05:40:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.389 05:40:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.389 05:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.389 05:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.389 05:40:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.389 05:40:24 -- paths/export.sh@5 -- # export PATH 00:25:21.389 05:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.389 05:40:24 -- nvmf/common.sh@46 -- # : 0 00:25:21.389 05:40:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:21.389 05:40:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:21.389 05:40:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:21.389 05:40:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.389 05:40:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.389 05:40:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:21.389 05:40:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:21.389 05:40:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:21.389 05:40:24 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.389 05:40:24 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.389 05:40:24 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:21.389 05:40:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:21.389 05:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.389 05:40:24 -- common/autotest_common.sh@10 -- # set +x 00:25:21.389 ************************************ 00:25:21.389 START TEST nvmf_shutdown_tc1 00:25:21.389 ************************************ 00:25:21.389 05:40:24 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:25:21.389 05:40:24 -- target/shutdown.sh@74 -- # starttarget 00:25:21.389 05:40:24 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:21.389 05:40:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:21.389 05:40:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.389 05:40:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:21.389 05:40:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:21.389 05:40:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:21.389 05:40:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.389 05:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.389 05:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.389 05:40:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:21.389 05:40:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:21.389 05:40:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:21.389 05:40:24 -- common/autotest_common.sh@10 -- # set +x 00:25:29.527 05:40:31 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:25:29.527 05:40:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:29.527 05:40:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:29.527 05:40:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:29.527 05:40:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:29.527 05:40:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:29.527 05:40:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:29.527 05:40:31 -- nvmf/common.sh@294 -- # net_devs=() 00:25:29.527 05:40:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:29.527 05:40:31 -- nvmf/common.sh@295 -- # e810=() 00:25:29.527 05:40:31 -- nvmf/common.sh@295 -- # local -ga e810 00:25:29.527 05:40:31 -- nvmf/common.sh@296 -- # x722=() 00:25:29.527 05:40:31 -- nvmf/common.sh@296 -- # local -ga x722 00:25:29.527 05:40:31 -- nvmf/common.sh@297 -- # mlx=() 00:25:29.527 05:40:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:29.527 05:40:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.527 05:40:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:29.527 05:40:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:29.527 05:40:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.527 05:40:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:29.527 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:29.527 05:40:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.527 05:40:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:29.527 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:29.527 05:40:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.527 05:40:31 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.527 05:40:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.527 05:40:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.527 05:40:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:29.527 Found net devices under 0000:31:00.0: cvl_0_0 00:25:29.527 05:40:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.527 05:40:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.527 05:40:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.527 05:40:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.527 05:40:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:29.527 Found net devices under 0000:31:00.1: cvl_0_1 00:25:29.527 05:40:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.527 05:40:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:29.527 05:40:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:29.527 05:40:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:29.527 05:40:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.527 05:40:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.527 05:40:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.527 05:40:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:29.527 05:40:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.527 05:40:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.527 05:40:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:29.528 05:40:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.528 05:40:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.528 05:40:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:29.528 05:40:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:29.528 05:40:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.528 05:40:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.528 05:40:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.528 05:40:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.528 05:40:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:29.528 05:40:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.528 05:40:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.528 05:40:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.528 05:40:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:29.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:29.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:25:29.528 00:25:29.528 --- 10.0.0.2 ping statistics --- 00:25:29.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.528 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:25:29.528 05:40:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:25:29.528 00:25:29.528 --- 10.0.0.1 ping statistics --- 00:25:29.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.528 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:29.528 05:40:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.528 05:40:32 -- nvmf/common.sh@410 -- # return 0 00:25:29.528 05:40:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.528 05:40:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.528 05:40:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.528 05:40:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.528 05:40:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.528 05:40:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.528 05:40:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.528 05:40:32 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:29.528 05:40:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:29.528 05:40:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.528 05:40:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.528 05:40:32 -- nvmf/common.sh@469 -- # nvmfpid=1933517 00:25:29.528 05:40:32 -- nvmf/common.sh@470 -- # waitforlisten 1933517 00:25:29.528 05:40:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:29.528 05:40:32 -- common/autotest_common.sh@829 -- # '[' -z 1933517 ']' 00:25:29.528 05:40:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.528 05:40:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.528 05:40:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.528 05:40:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.528 05:40:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.528 [2024-12-07 05:40:32.185735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:29.528 [2024-12-07 05:40:32.185801] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.528 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.528 [2024-12-07 05:40:32.278986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.528 [2024-12-07 05:40:32.372706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:29.528 [2024-12-07 05:40:32.372854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.528 [2024-12-07 05:40:32.372866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
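[annotation] The nvmftestinit/nvmf_tcp_init sequence traced above reduces to the stand-alone bring-up below. This is only a condensed restatement of the commands already visible in this log (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing and port 4420 are taken from the trace); the comments are added for orientation.
# One E810 port (cvl_0_0) becomes the target side inside a private namespace,
# the other (cvl_0_1) stays in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability check
modprobe nvme-tcp                                                  # kernel NVMe/TCP initiator for the CLI tests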
00:25:29.528 [2024-12-07 05:40:32.372874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.528 [2024-12-07 05:40:32.373029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.528 [2024-12-07 05:40:32.373212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.528 [2024-12-07 05:40:32.373380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.528 [2024-12-07 05:40:32.373381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:29.789 05:40:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.789 05:40:32 -- common/autotest_common.sh@862 -- # return 0 00:25:29.789 05:40:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:29.789 05:40:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.790 05:40:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.790 05:40:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.790 05:40:33 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.790 05:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.790 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:29.790 [2024-12-07 05:40:33.025076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.051 05:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.051 05:40:33 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:30.051 05:40:33 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:30.051 05:40:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.051 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:30.051 05:40:33 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.051 05:40:33 -- target/shutdown.sh@28 -- # cat 00:25:30.051 05:40:33 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:30.051 05:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.051 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:30.051 Malloc1 00:25:30.051 [2024-12-07 
05:40:33.128519] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.051 Malloc2 00:25:30.051 Malloc3 00:25:30.051 Malloc4 00:25:30.051 Malloc5 00:25:30.316 Malloc6 00:25:30.316 Malloc7 00:25:30.316 Malloc8 00:25:30.316 Malloc9 00:25:30.316 Malloc10 00:25:30.316 05:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.316 05:40:33 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:30.316 05:40:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.316 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:30.316 05:40:33 -- target/shutdown.sh@78 -- # perfpid=1933907 00:25:30.316 05:40:33 -- target/shutdown.sh@79 -- # waitforlisten 1933907 /var/tmp/bdevperf.sock 00:25:30.316 05:40:33 -- common/autotest_common.sh@829 -- # '[' -z 1933907 ']' 00:25:30.316 05:40:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.316 05:40:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.316 05:40:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.316 05:40:33 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:30.316 05:40:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.316 05:40:33 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:30.316 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:30.316 05:40:33 -- nvmf/common.sh@520 -- # config=() 00:25:30.316 05:40:33 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.316 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.316 { 00:25:30.316 "params": { 00:25:30.316 "name": "Nvme$subsystem", 00:25:30.316 "trtype": "$TEST_TRANSPORT", 00:25:30.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.316 "adrfam": "ipv4", 00:25:30.316 "trsvcid": "$NVMF_PORT", 00:25:30.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.316 "hdgst": ${hdgst:-false}, 00:25:30.316 "ddgst": ${ddgst:-false} 00:25:30.316 }, 00:25:30.316 "method": "bdev_nvme_attach_controller" 00:25:30.316 } 00:25:30.316 EOF 00:25:30.316 )") 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.316 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.316 { 00:25:30.316 "params": { 00:25:30.316 "name": "Nvme$subsystem", 00:25:30.316 "trtype": "$TEST_TRANSPORT", 00:25:30.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.316 "adrfam": "ipv4", 00:25:30.316 "trsvcid": "$NVMF_PORT", 00:25:30.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.316 "hdgst": ${hdgst:-false}, 00:25:30.316 "ddgst": ${ddgst:-false} 00:25:30.316 }, 00:25:30.316 "method": "bdev_nvme_attach_controller" 00:25:30.316 } 00:25:30.316 EOF 00:25:30.316 )") 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.316 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.316 { 00:25:30.316 "params": { 00:25:30.316 
"name": "Nvme$subsystem", 00:25:30.316 "trtype": "$TEST_TRANSPORT", 00:25:30.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.316 "adrfam": "ipv4", 00:25:30.316 "trsvcid": "$NVMF_PORT", 00:25:30.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.316 "hdgst": ${hdgst:-false}, 00:25:30.316 "ddgst": ${ddgst:-false} 00:25:30.316 }, 00:25:30.316 "method": "bdev_nvme_attach_controller" 00:25:30.316 } 00:25:30.316 EOF 00:25:30.316 )") 00:25:30.316 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.579 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.579 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.579 { 00:25:30.579 "params": { 00:25:30.579 "name": "Nvme$subsystem", 00:25:30.579 "trtype": "$TEST_TRANSPORT", 00:25:30.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.579 "adrfam": "ipv4", 00:25:30.579 "trsvcid": "$NVMF_PORT", 00:25:30.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.579 "hdgst": ${hdgst:-false}, 00:25:30.579 "ddgst": ${ddgst:-false} 00:25:30.579 }, 00:25:30.579 "method": "bdev_nvme_attach_controller" 00:25:30.579 } 00:25:30.579 EOF 00:25:30.579 )") 00:25:30.579 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.579 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.579 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.579 { 00:25:30.579 "params": { 00:25:30.579 "name": "Nvme$subsystem", 00:25:30.579 "trtype": "$TEST_TRANSPORT", 00:25:30.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.579 "adrfam": "ipv4", 00:25:30.579 "trsvcid": "$NVMF_PORT", 00:25:30.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.579 "hdgst": ${hdgst:-false}, 00:25:30.579 "ddgst": ${ddgst:-false} 00:25:30.579 }, 00:25:30.579 "method": "bdev_nvme_attach_controller" 00:25:30.579 } 00:25:30.579 EOF 00:25:30.579 )") 00:25:30.579 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.580 { 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme$subsystem", 00:25:30.580 "trtype": "$TEST_TRANSPORT", 00:25:30.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "$NVMF_PORT", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.580 "hdgst": ${hdgst:-false}, 00:25:30.580 "ddgst": ${ddgst:-false} 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 } 00:25:30.580 EOF 00:25:30.580 )") 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.580 { 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme$subsystem", 00:25:30.580 "trtype": "$TEST_TRANSPORT", 00:25:30.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "$NVMF_PORT", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.580 "hdgst": ${hdgst:-false}, 00:25:30.580 "ddgst": ${ddgst:-false} 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 } 00:25:30.580 EOF 00:25:30.580 )") 00:25:30.580 
05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 [2024-12-07 05:40:33.583953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:30.580 [2024-12-07 05:40:33.584029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:30.580 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.580 { 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme$subsystem", 00:25:30.580 "trtype": "$TEST_TRANSPORT", 00:25:30.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "$NVMF_PORT", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.580 "hdgst": ${hdgst:-false}, 00:25:30.580 "ddgst": ${ddgst:-false} 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 } 00:25:30.580 EOF 00:25:30.580 )") 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.580 { 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme$subsystem", 00:25:30.580 "trtype": "$TEST_TRANSPORT", 00:25:30.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "$NVMF_PORT", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.580 "hdgst": ${hdgst:-false}, 00:25:30.580 "ddgst": ${ddgst:-false} 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 } 00:25:30.580 EOF 00:25:30.580 )") 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 05:40:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.580 { 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme$subsystem", 00:25:30.580 "trtype": "$TEST_TRANSPORT", 00:25:30.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "$NVMF_PORT", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.580 "hdgst": ${hdgst:-false}, 00:25:30.580 "ddgst": ${ddgst:-false} 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 } 00:25:30.580 EOF 00:25:30.580 )") 00:25:30.580 05:40:33 -- nvmf/common.sh@542 -- # cat 00:25:30.580 05:40:33 -- nvmf/common.sh@544 -- # jq . 
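[annotation] The fragments assembled above by gen_nvmf_target_json (one bdev_nvme_attach_controller entry per cnode1..cnode10 subsystem, comma-joined and piped through jq) can be reproduced stand-alone roughly as sketched below; the fully expanded result is printed a few lines further down. The function name build_bdevperf_json is made up here, and the enclosing "subsystems"/"bdev" scaffolding is the usual SPDK JSON-config layout rather than something shown in this trace, so treat both as assumptions; the target address, port and NQN patterns are the ones visible in the log.
build_bdevperf_json() {
    local i frags=()
    for i in "$@"; do
        # One attach-controller entry per NVMe-oF subsystem on 10.0.0.2:4420.
        frags+=("{\"params\":{\"name\":\"Nvme$i\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$i\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
    done
    local IFS=,
    # Comma-join the fragments, wrap them in the (assumed) standard SPDK JSON-config
    # layout and pretty-print, mirroring the IFS=, / printf / jq . calls traced above.
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${frags[*]}" | jq .
}
# Consumed over a process substitution, as in the commands traced in this test:
#   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(build_bdevperf_json {1..10})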
00:25:30.580 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.580 05:40:33 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.580 05:40:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme1", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme2", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme3", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme4", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme5", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme6", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme7", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme8", 00:25:30.580 "trtype": "tcp", 00:25:30.580 "traddr": "10.0.0.2", 00:25:30.580 "adrfam": "ipv4", 00:25:30.580 "trsvcid": "4420", 00:25:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:30.580 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:30.580 "hdgst": false, 00:25:30.580 "ddgst": false 
00:25:30.580 }, 00:25:30.580 "method": "bdev_nvme_attach_controller" 00:25:30.580 },{ 00:25:30.580 "params": { 00:25:30.580 "name": "Nvme9", 00:25:30.580 "trtype": "tcp", 00:25:30.581 "traddr": "10.0.0.2", 00:25:30.581 "adrfam": "ipv4", 00:25:30.581 "trsvcid": "4420", 00:25:30.581 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:30.581 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:30.581 "hdgst": false, 00:25:30.581 "ddgst": false 00:25:30.581 }, 00:25:30.581 "method": "bdev_nvme_attach_controller" 00:25:30.581 },{ 00:25:30.581 "params": { 00:25:30.581 "name": "Nvme10", 00:25:30.581 "trtype": "tcp", 00:25:30.581 "traddr": "10.0.0.2", 00:25:30.581 "adrfam": "ipv4", 00:25:30.581 "trsvcid": "4420", 00:25:30.581 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:30.581 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:30.581 "hdgst": false, 00:25:30.581 "ddgst": false 00:25:30.581 }, 00:25:30.581 "method": "bdev_nvme_attach_controller" 00:25:30.581 }' 00:25:30.581 [2024-12-07 05:40:33.648328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.581 [2024-12-07 05:40:33.710940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.967 05:40:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.967 05:40:35 -- common/autotest_common.sh@862 -- # return 0 00:25:31.967 05:40:35 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:31.967 05:40:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.967 05:40:35 -- common/autotest_common.sh@10 -- # set +x 00:25:31.967 05:40:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.967 05:40:35 -- target/shutdown.sh@83 -- # kill -9 1933907 00:25:31.967 05:40:35 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:31.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1933907 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:31.967 05:40:35 -- target/shutdown.sh@87 -- # sleep 1 00:25:32.905 05:40:36 -- target/shutdown.sh@88 -- # kill -0 1933517 00:25:32.905 05:40:36 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:32.905 05:40:36 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:32.905 05:40:36 -- nvmf/common.sh@520 -- # config=() 00:25:32.905 05:40:36 -- nvmf/common.sh@520 -- # local subsystem config 00:25:32.905 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.905 { 00:25:32.905 "params": { 00:25:32.905 "name": "Nvme$subsystem", 00:25:32.905 "trtype": "$TEST_TRANSPORT", 00:25:32.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.905 "adrfam": "ipv4", 00:25:32.905 "trsvcid": "$NVMF_PORT", 00:25:32.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.905 "hdgst": ${hdgst:-false}, 00:25:32.905 "ddgst": ${ddgst:-false} 00:25:32.905 }, 00:25:32.905 "method": "bdev_nvme_attach_controller" 00:25:32.905 } 00:25:32.905 EOF 00:25:32.905 )") 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.905 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.905 { 00:25:32.905 "params": { 00:25:32.905 "name": "Nvme$subsystem", 
00:25:32.905 "trtype": "$TEST_TRANSPORT", 00:25:32.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.905 "adrfam": "ipv4", 00:25:32.905 "trsvcid": "$NVMF_PORT", 00:25:32.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.905 "hdgst": ${hdgst:-false}, 00:25:32.905 "ddgst": ${ddgst:-false} 00:25:32.905 }, 00:25:32.905 "method": "bdev_nvme_attach_controller" 00:25:32.905 } 00:25:32.905 EOF 00:25:32.905 )") 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.905 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.905 { 00:25:32.905 "params": { 00:25:32.905 "name": "Nvme$subsystem", 00:25:32.905 "trtype": "$TEST_TRANSPORT", 00:25:32.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.905 "adrfam": "ipv4", 00:25:32.905 "trsvcid": "$NVMF_PORT", 00:25:32.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.905 "hdgst": ${hdgst:-false}, 00:25:32.905 "ddgst": ${ddgst:-false} 00:25:32.905 }, 00:25:32.905 "method": "bdev_nvme_attach_controller" 00:25:32.905 } 00:25:32.905 EOF 00:25:32.905 )") 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.905 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.905 { 00:25:32.905 "params": { 00:25:32.905 "name": "Nvme$subsystem", 00:25:32.905 "trtype": "$TEST_TRANSPORT", 00:25:32.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.905 "adrfam": "ipv4", 00:25:32.905 "trsvcid": "$NVMF_PORT", 00:25:32.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.905 "hdgst": ${hdgst:-false}, 00:25:32.905 "ddgst": ${ddgst:-false} 00:25:32.905 }, 00:25:32.905 "method": "bdev_nvme_attach_controller" 00:25:32.905 } 00:25:32.905 EOF 00:25:32.905 )") 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.905 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.905 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.905 { 00:25:32.905 "params": { 00:25:32.905 "name": "Nvme$subsystem", 00:25:32.905 "trtype": "$TEST_TRANSPORT", 00:25:32.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.905 "adrfam": "ipv4", 00:25:32.905 "trsvcid": "$NVMF_PORT", 00:25:32.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.905 "hdgst": ${hdgst:-false}, 00:25:32.905 "ddgst": ${ddgst:-false} 00:25:32.905 }, 00:25:32.905 "method": "bdev_nvme_attach_controller" 00:25:32.905 } 00:25:32.905 EOF 00:25:32.905 )") 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.906 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.906 { 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme$subsystem", 00:25:32.906 "trtype": "$TEST_TRANSPORT", 00:25:32.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "$NVMF_PORT", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.906 "hdgst": ${hdgst:-false}, 00:25:32.906 "ddgst": ${ddgst:-false} 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 } 00:25:32.906 EOF 00:25:32.906 )") 00:25:32.906 05:40:36 -- nvmf/common.sh@542 
-- # cat 00:25:32.906 [2024-12-07 05:40:36.079406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:32.906 [2024-12-07 05:40:36.079458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1934342 ] 00:25:32.906 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.906 { 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme$subsystem", 00:25:32.906 "trtype": "$TEST_TRANSPORT", 00:25:32.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "$NVMF_PORT", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.906 "hdgst": ${hdgst:-false}, 00:25:32.906 "ddgst": ${ddgst:-false} 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 } 00:25:32.906 EOF 00:25:32.906 )") 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.906 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.906 { 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme$subsystem", 00:25:32.906 "trtype": "$TEST_TRANSPORT", 00:25:32.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "$NVMF_PORT", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.906 "hdgst": ${hdgst:-false}, 00:25:32.906 "ddgst": ${ddgst:-false} 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 } 00:25:32.906 EOF 00:25:32.906 )") 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.906 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.906 { 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme$subsystem", 00:25:32.906 "trtype": "$TEST_TRANSPORT", 00:25:32.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "$NVMF_PORT", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.906 "hdgst": ${hdgst:-false}, 00:25:32.906 "ddgst": ${ddgst:-false} 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 } 00:25:32.906 EOF 00:25:32.906 )") 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.906 05:40:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.906 { 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme$subsystem", 00:25:32.906 "trtype": "$TEST_TRANSPORT", 00:25:32.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "$NVMF_PORT", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.906 "hdgst": ${hdgst:-false}, 00:25:32.906 "ddgst": ${ddgst:-false} 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 } 00:25:32.906 EOF 00:25:32.906 )") 00:25:32.906 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.906 05:40:36 -- nvmf/common.sh@542 -- # cat 00:25:32.906 05:40:36 -- nvmf/common.sh@544 -- # jq . 
00:25:32.906 05:40:36 -- nvmf/common.sh@545 -- # IFS=, 00:25:32.906 05:40:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme1", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme2", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme3", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme4", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme5", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme6", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme7", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme8", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": 
"bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme9", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 },{ 00:25:32.906 "params": { 00:25:32.906 "name": "Nvme10", 00:25:32.906 "trtype": "tcp", 00:25:32.906 "traddr": "10.0.0.2", 00:25:32.906 "adrfam": "ipv4", 00:25:32.906 "trsvcid": "4420", 00:25:32.906 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:32.906 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:32.906 "hdgst": false, 00:25:32.906 "ddgst": false 00:25:32.906 }, 00:25:32.906 "method": "bdev_nvme_attach_controller" 00:25:32.906 }' 00:25:32.906 [2024-12-07 05:40:36.141286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.166 [2024-12-07 05:40:36.203861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.549 Running I/O for 1 seconds... 00:25:35.931 00:25:35.931 Latency(us) 00:25:35.931 [2024-12-07T04:40:39.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.931 [2024-12-07T04:40:39.171Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.931 Verification LBA range: start 0x0 length 0x400 00:25:35.931 Nvme1n1 : 1.09 442.79 27.67 0.00 0.00 142187.91 19879.25 110100.48 00:25:35.931 [2024-12-07T04:40:39.171Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.931 Verification LBA range: start 0x0 length 0x400 00:25:35.931 Nvme2n1 : 1.07 406.09 25.38 0.00 0.00 153410.49 19988.48 125829.12 00:25:35.931 [2024-12-07T04:40:39.171Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.931 Verification LBA range: start 0x0 length 0x400 00:25:35.931 Nvme3n1 : 1.09 441.87 27.62 0.00 0.00 140480.31 18786.99 113595.73 00:25:35.931 [2024-12-07T04:40:39.171Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme4n1 : 1.09 444.90 27.81 0.00 0.00 138427.39 18786.99 108789.76 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme5n1 : 1.10 440.53 27.53 0.00 0.00 138713.00 18131.63 107479.04 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme6n1 : 1.10 438.39 27.40 0.00 0.00 138383.06 19223.89 112721.92 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme7n1 : 1.10 440.15 27.51 0.00 0.00 136489.57 20425.39 108789.76 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme8n1 : 1.10 439.23 27.45 0.00 0.00 135950.90 19660.80 110974.29 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme9n1 : 1.10 437.39 
27.34 0.00 0.00 135928.29 14090.24 117964.80 00:25:35.932 [2024-12-07T04:40:39.172Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.932 Verification LBA range: start 0x0 length 0x400 00:25:35.932 Nvme10n1 : 1.09 399.37 24.96 0.00 0.00 147317.77 11468.80 132819.63 00:25:35.932 [2024-12-07T04:40:39.172Z] =================================================================================================================== 00:25:35.932 [2024-12-07T04:40:39.172Z] Total : 4330.70 270.67 0.00 0.00 140529.32 11468.80 132819.63 00:25:35.932 05:40:38 -- target/shutdown.sh@93 -- # stoptarget 00:25:35.932 05:40:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:35.932 05:40:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:35.932 05:40:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:35.932 05:40:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:35.932 05:40:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:35.932 05:40:38 -- nvmf/common.sh@116 -- # sync 00:25:35.932 05:40:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:35.932 05:40:38 -- nvmf/common.sh@119 -- # set +e 00:25:35.932 05:40:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:35.932 05:40:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:35.932 rmmod nvme_tcp 00:25:35.932 rmmod nvme_fabrics 00:25:35.932 rmmod nvme_keyring 00:25:35.932 05:40:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:35.932 05:40:38 -- nvmf/common.sh@123 -- # set -e 00:25:35.932 05:40:38 -- nvmf/common.sh@124 -- # return 0 00:25:35.932 05:40:38 -- nvmf/common.sh@477 -- # '[' -n 1933517 ']' 00:25:35.932 05:40:38 -- nvmf/common.sh@478 -- # killprocess 1933517 00:25:35.932 05:40:38 -- common/autotest_common.sh@936 -- # '[' -z 1933517 ']' 00:25:35.932 05:40:38 -- common/autotest_common.sh@940 -- # kill -0 1933517 00:25:35.932 05:40:38 -- common/autotest_common.sh@941 -- # uname 00:25:35.932 05:40:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.932 05:40:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1933517 00:25:35.932 05:40:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:35.932 05:40:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:35.932 05:40:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1933517' 00:25:35.932 killing process with pid 1933517 00:25:35.932 05:40:39 -- common/autotest_common.sh@955 -- # kill 1933517 00:25:35.932 05:40:39 -- common/autotest_common.sh@960 -- # wait 1933517 00:25:36.192 05:40:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:36.192 05:40:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:36.192 05:40:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:36.192 05:40:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.192 05:40:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:36.192 05:40:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.192 05:40:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.192 05:40:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.103 05:40:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:38.103 00:25:38.103 real 0m16.718s 00:25:38.103 user 0m33.720s 00:25:38.103 sys 0m6.791s 00:25:38.103 05:40:41 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:25:38.103 05:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:38.103 ************************************ 00:25:38.103 END TEST nvmf_shutdown_tc1 00:25:38.103 ************************************ 00:25:38.364 05:40:41 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:38.365 05:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:38.365 05:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.365 05:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:38.365 ************************************ 00:25:38.365 START TEST nvmf_shutdown_tc2 00:25:38.365 ************************************ 00:25:38.365 05:40:41 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:25:38.365 05:40:41 -- target/shutdown.sh@98 -- # starttarget 00:25:38.365 05:40:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:38.365 05:40:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:38.365 05:40:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.365 05:40:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:38.365 05:40:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:38.365 05:40:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:38.365 05:40:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.365 05:40:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.365 05:40:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.365 05:40:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:38.365 05:40:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:38.365 05:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:38.365 05:40:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:38.365 05:40:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:38.365 05:40:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:38.365 05:40:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:38.365 05:40:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:38.365 05:40:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:38.365 05:40:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:38.365 05:40:41 -- nvmf/common.sh@294 -- # net_devs=() 00:25:38.365 05:40:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:38.365 05:40:41 -- nvmf/common.sh@295 -- # e810=() 00:25:38.365 05:40:41 -- nvmf/common.sh@295 -- # local -ga e810 00:25:38.365 05:40:41 -- nvmf/common.sh@296 -- # x722=() 00:25:38.365 05:40:41 -- nvmf/common.sh@296 -- # local -ga x722 00:25:38.365 05:40:41 -- nvmf/common.sh@297 -- # mlx=() 00:25:38.365 05:40:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:38.365 05:40:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.365 05:40:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:38.365 05:40:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:38.365 05:40:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.365 05:40:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:38.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:38.365 05:40:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.365 05:40:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:38.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:38.365 05:40:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.365 05:40:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.365 05:40:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.365 05:40:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:38.365 Found net devices under 0000:31:00.0: cvl_0_0 00:25:38.365 05:40:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.365 05:40:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.365 05:40:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.365 05:40:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.365 05:40:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:38.365 Found net devices under 0000:31:00.1: cvl_0_1 00:25:38.365 05:40:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.365 05:40:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:38.365 05:40:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:38.365 05:40:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:38.365 05:40:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.365 
05:40:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.365 05:40:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.365 05:40:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:38.365 05:40:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.365 05:40:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.365 05:40:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:38.365 05:40:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.365 05:40:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.365 05:40:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:38.365 05:40:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:38.365 05:40:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.365 05:40:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.365 05:40:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.365 05:40:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.365 05:40:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:38.365 05:40:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.626 05:40:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.626 05:40:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.626 05:40:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:38.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:25:38.626 00:25:38.626 --- 10.0.0.2 ping statistics --- 00:25:38.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.626 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:25:38.626 05:40:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:25:38.626 00:25:38.626 --- 10.0.0.1 ping statistics --- 00:25:38.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.627 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:25:38.627 05:40:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.627 05:40:41 -- nvmf/common.sh@410 -- # return 0 00:25:38.627 05:40:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:38.627 05:40:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.627 05:40:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:38.627 05:40:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:38.627 05:40:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.627 05:40:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:38.627 05:40:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:38.627 05:40:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:38.627 05:40:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:38.627 05:40:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:38.627 05:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:38.627 05:40:41 -- nvmf/common.sh@469 -- # nvmfpid=1935658 00:25:38.627 05:40:41 -- nvmf/common.sh@470 -- # waitforlisten 1935658 00:25:38.627 05:40:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:38.627 05:40:41 -- common/autotest_common.sh@829 -- # '[' -z 1935658 ']' 00:25:38.627 05:40:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.627 05:40:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.627 05:40:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.627 05:40:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.627 05:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:38.627 [2024-12-07 05:40:41.832877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:38.627 [2024-12-07 05:40:41.832942] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.888 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.888 [2024-12-07 05:40:41.923080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.888 [2024-12-07 05:40:41.989948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.888 [2024-12-07 05:40:41.990073] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.888 [2024-12-07 05:40:41.990082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.888 [2024-12-07 05:40:41.990088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
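[annotation] On the target side, the Malloc1..Malloc10 bdevs, the cnode1..cnode10 subsystems and the listener on 10.0.0.2:4420 that appear in the next few lines are driven through the rpcs.txt batch that shutdown.sh assembles; that batch itself is not printed in this log. As a sketch only, an equivalent configuration could be built by hand with standard SPDK RPCs roughly as below: the bdev size and block size come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier in this test, while the serial numbers (SPDK$i) and the exact RPC sequence are assumptions, not the script's own commands.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to the running nvmf_tgt on /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192        # same transport call that rpc_cmd issues in this test
for i in $(seq 1 10); do
    $rpc bdev_malloc_create -b Malloc$i 64 512      # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done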
00:25:38.888 [2024-12-07 05:40:41.990247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.888 [2024-12-07 05:40:41.990466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.888 [2024-12-07 05:40:41.990620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.888 [2024-12-07 05:40:41.990620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:39.458 05:40:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.458 05:40:42 -- common/autotest_common.sh@862 -- # return 0 00:25:39.458 05:40:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:39.458 05:40:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.458 05:40:42 -- common/autotest_common.sh@10 -- # set +x 00:25:39.458 05:40:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.459 05:40:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.459 05:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.459 05:40:42 -- common/autotest_common.sh@10 -- # set +x 00:25:39.459 [2024-12-07 05:40:42.673062] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.459 05:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.459 05:40:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:39.459 05:40:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:39.459 05:40:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.459 05:40:42 -- common/autotest_common.sh@10 -- # set +x 00:25:39.459 05:40:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:39.459 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.459 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.459 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.459 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.719 05:40:42 -- target/shutdown.sh@28 -- # cat 00:25:39.719 05:40:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:39.719 05:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.719 05:40:42 -- common/autotest_common.sh@10 -- # set +x 00:25:39.719 Malloc1 00:25:39.719 [2024-12-07 05:40:42.771974] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.719 Malloc2 
00:25:39.719 Malloc3 00:25:39.719 Malloc4 00:25:39.719 Malloc5 00:25:39.719 Malloc6 00:25:39.980 Malloc7 00:25:39.980 Malloc8 00:25:39.980 Malloc9 00:25:39.980 Malloc10 00:25:39.980 05:40:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.980 05:40:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:39.980 05:40:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.980 05:40:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.980 05:40:43 -- target/shutdown.sh@102 -- # perfpid=1935863 00:25:39.980 05:40:43 -- target/shutdown.sh@103 -- # waitforlisten 1935863 /var/tmp/bdevperf.sock 00:25:39.980 05:40:43 -- common/autotest_common.sh@829 -- # '[' -z 1935863 ']' 00:25:39.980 05:40:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.980 05:40:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.980 05:40:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.980 05:40:43 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:39.980 05:40:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.980 05:40:43 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:39.980 05:40:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.980 05:40:43 -- nvmf/common.sh@520 -- # config=() 00:25:39.980 05:40:43 -- nvmf/common.sh@520 -- # local subsystem config 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:39.980 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:39.980 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:39.980 { 00:25:39.980 "params": { 00:25:39.980 "name": "Nvme$subsystem", 00:25:39.980 "trtype": "$TEST_TRANSPORT", 00:25:39.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.980 "adrfam": "ipv4", 00:25:39.980 "trsvcid": "$NVMF_PORT", 00:25:39.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.980 "hdgst": ${hdgst:-false}, 00:25:39.980 "ddgst": ${ddgst:-false} 00:25:39.980 }, 00:25:39.980 "method": "bdev_nvme_attach_controller" 00:25:39.980 } 00:25:39.980 EOF 00:25:39.980 )") 00:25:40.244 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:40.244 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.245 { 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme$subsystem", 00:25:40.245 "trtype": "$TEST_TRANSPORT", 00:25:40.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "$NVMF_PORT", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.245 "hdgst": ${hdgst:-false}, 00:25:40.245 "ddgst": ${ddgst:-false} 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.245 } 00:25:40.245 EOF 00:25:40.245 )") 00:25:40.245 [2024-12-07 05:40:43.224768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:40.245 [2024-12-07 05:40:43.224851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935863 ] 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:40.245 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.245 { 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme$subsystem", 00:25:40.245 "trtype": "$TEST_TRANSPORT", 00:25:40.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "$NVMF_PORT", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.245 "hdgst": ${hdgst:-false}, 00:25:40.245 "ddgst": ${ddgst:-false} 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.245 } 00:25:40.245 EOF 00:25:40.245 )") 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:40.245 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.245 { 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme$subsystem", 00:25:40.245 "trtype": "$TEST_TRANSPORT", 00:25:40.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "$NVMF_PORT", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.245 "hdgst": ${hdgst:-false}, 00:25:40.245 "ddgst": ${ddgst:-false} 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.245 } 00:25:40.245 EOF 00:25:40.245 )") 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:40.245 05:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.245 { 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme$subsystem", 00:25:40.245 "trtype": "$TEST_TRANSPORT", 00:25:40.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "$NVMF_PORT", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.245 "hdgst": ${hdgst:-false}, 00:25:40.245 "ddgst": ${ddgst:-false} 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.245 } 00:25:40.245 EOF 00:25:40.245 )") 00:25:40.245 05:40:43 -- nvmf/common.sh@542 -- # cat 00:25:40.245 05:40:43 -- nvmf/common.sh@544 -- # jq . 
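The gen_nvmf_target_json trace above builds one bdevperf controller entry per subsystem from a heredoc template, comma-joins the entries, and pretty-prints the result with jq. A condensed sketch of that pattern follows; the subsystem count and addresses are hard-coded for illustration, and the outer JSON wrapper is only there so jq can validate the output (the real wrapper lives in the harness and is not reproduced here):

#!/usr/bin/env bash
# Sketch of the per-subsystem config generation seen above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the blocks as the harness does (IFS=, plus "${config[*]}");
# the array brackets are added here only so jq has valid JSON to check.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '[ %s ]\n' "$joined" | jq .

The joined entries are what the printf in the trace below emits, one bdev_nvme_attach_controller call per NVMe-oF subsystem, which bdevperf then consumes via --json /dev/fd/63.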
00:25:40.245 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.245 05:40:43 -- nvmf/common.sh@545 -- # IFS=, 00:25:40.245 05:40:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme1", 00:25:40.245 "trtype": "tcp", 00:25:40.245 "traddr": "10.0.0.2", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "4420", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:40.245 "hdgst": false, 00:25:40.245 "ddgst": false 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.245 },{ 00:25:40.245 "params": { 00:25:40.245 "name": "Nvme2", 00:25:40.245 "trtype": "tcp", 00:25:40.245 "traddr": "10.0.0.2", 00:25:40.245 "adrfam": "ipv4", 00:25:40.245 "trsvcid": "4420", 00:25:40.245 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:40.245 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:40.245 "hdgst": false, 00:25:40.245 "ddgst": false 00:25:40.245 }, 00:25:40.245 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme3", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 00:25:40.246 }, 00:25:40.246 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme4", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 00:25:40.246 }, 00:25:40.246 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme5", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 00:25:40.246 }, 00:25:40.246 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme6", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 00:25:40.246 }, 00:25:40.246 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme7", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 00:25:40.246 }, 00:25:40.246 "method": "bdev_nvme_attach_controller" 00:25:40.246 },{ 00:25:40.246 "params": { 00:25:40.246 "name": "Nvme8", 00:25:40.246 "trtype": "tcp", 00:25:40.246 "traddr": "10.0.0.2", 00:25:40.246 "adrfam": "ipv4", 00:25:40.246 "trsvcid": "4420", 00:25:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:40.246 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:40.246 "hdgst": false, 00:25:40.246 "ddgst": false 
00:25:40.246 }, 00:25:40.247 "method": "bdev_nvme_attach_controller" 00:25:40.247 },{ 00:25:40.247 "params": { 00:25:40.247 "name": "Nvme9", 00:25:40.247 "trtype": "tcp", 00:25:40.247 "traddr": "10.0.0.2", 00:25:40.247 "adrfam": "ipv4", 00:25:40.247 "trsvcid": "4420", 00:25:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:40.247 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:40.247 "hdgst": false, 00:25:40.247 "ddgst": false 00:25:40.247 }, 00:25:40.247 "method": "bdev_nvme_attach_controller" 00:25:40.247 },{ 00:25:40.247 "params": { 00:25:40.247 "name": "Nvme10", 00:25:40.247 "trtype": "tcp", 00:25:40.247 "traddr": "10.0.0.2", 00:25:40.247 "adrfam": "ipv4", 00:25:40.247 "trsvcid": "4420", 00:25:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:40.247 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:40.247 "hdgst": false, 00:25:40.247 "ddgst": false 00:25:40.247 }, 00:25:40.247 "method": "bdev_nvme_attach_controller" 00:25:40.247 }' 00:25:40.247 [2024-12-07 05:40:43.290062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.247 [2024-12-07 05:40:43.352969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.632 Running I/O for 10 seconds... 00:25:41.632 05:40:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.632 05:40:44 -- common/autotest_common.sh@862 -- # return 0 00:25:41.632 05:40:44 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:41.632 05:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.632 05:40:44 -- common/autotest_common.sh@10 -- # set +x 00:25:41.632 05:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.632 05:40:44 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:41.632 05:40:44 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:41.632 05:40:44 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:41.632 05:40:44 -- target/shutdown.sh@57 -- # local ret=1 00:25:41.632 05:40:44 -- target/shutdown.sh@58 -- # local i 00:25:41.632 05:40:44 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:41.632 05:40:44 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.632 05:40:44 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.632 05:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.632 05:40:44 -- common/autotest_common.sh@10 -- # set +x 00:25:41.632 05:40:44 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.632 05:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.632 05:40:44 -- target/shutdown.sh@60 -- # read_io_count=42 00:25:41.632 05:40:44 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:25:41.632 05:40:44 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:41.892 05:40:45 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:41.892 05:40:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.892 05:40:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.892 05:40:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.892 05:40:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.892 05:40:45 -- common/autotest_common.sh@10 -- # set +x 00:25:41.892 05:40:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.892 05:40:45 -- target/shutdown.sh@60 -- # read_io_count=167 00:25:41.892 05:40:45 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:25:41.892 05:40:45 -- target/shutdown.sh@64 -- # ret=0 00:25:41.892 05:40:45 -- 
target/shutdown.sh@65 -- # break 00:25:41.892 05:40:45 -- target/shutdown.sh@69 -- # return 0 00:25:41.892 05:40:45 -- target/shutdown.sh@109 -- # killprocess 1935863 00:25:41.892 05:40:45 -- common/autotest_common.sh@936 -- # '[' -z 1935863 ']' 00:25:41.892 05:40:45 -- common/autotest_common.sh@940 -- # kill -0 1935863 00:25:41.892 05:40:45 -- common/autotest_common.sh@941 -- # uname 00:25:41.892 05:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:41.892 05:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1935863 00:25:41.892 05:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:41.892 05:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:41.892 05:40:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1935863' 00:25:41.892 killing process with pid 1935863 00:25:41.892 05:40:45 -- common/autotest_common.sh@955 -- # kill 1935863 00:25:41.892 05:40:45 -- common/autotest_common.sh@960 -- # wait 1935863 00:25:42.151 Received shutdown signal, test time was about 0.652818 seconds 00:25:42.151 00:25:42.151 Latency(us) 00:25:42.151 [2024-12-07T04:40:45.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme1n1 : 0.64 428.28 26.77 0.00 0.00 144663.29 14308.69 163403.09 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme2n1 : 0.62 435.74 27.23 0.00 0.00 140807.83 19223.89 127576.75 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme3n1 : 0.62 441.96 27.62 0.00 0.00 137072.84 18459.31 139810.13 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme4n1 : 0.62 439.82 27.49 0.00 0.00 135860.10 18350.08 142431.57 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme5n1 : 0.65 417.22 26.08 0.00 0.00 133149.11 17803.95 132819.63 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme6n1 : 0.63 437.94 27.37 0.00 0.00 132365.33 6144.00 116217.17 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme7n1 : 0.62 438.86 27.43 0.00 0.00 130208.83 21517.65 114469.55 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme8n1 : 0.61 448.24 28.01 0.00 0.00 126262.84 7755.09 113595.73 00:25:42.151 [2024-12-07T04:40:45.391Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme9n1 : 0.62 437.12 27.32 0.00 0.00 127387.01 19660.80 104420.69 00:25:42.151 
[2024-12-07T04:40:45.391Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.151 Verification LBA range: start 0x0 length 0x400 00:25:42.151 Nvme10n1 : 0.63 432.96 27.06 0.00 0.00 127236.42 7973.55 114469.55 00:25:42.151 [2024-12-07T04:40:45.391Z] =================================================================================================================== 00:25:42.151 [2024-12-07T04:40:45.391Z] Total : 4358.14 272.38 0.00 0.00 133495.21 6144.00 163403.09 00:25:42.151 05:40:45 -- target/shutdown.sh@112 -- # sleep 1 00:25:43.531 05:40:46 -- target/shutdown.sh@113 -- # kill -0 1935658 00:25:43.531 05:40:46 -- target/shutdown.sh@115 -- # stoptarget 00:25:43.531 05:40:46 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:43.531 05:40:46 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:43.531 05:40:46 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:43.531 05:40:46 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:43.531 05:40:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:43.531 05:40:46 -- nvmf/common.sh@116 -- # sync 00:25:43.531 05:40:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:43.531 05:40:46 -- nvmf/common.sh@119 -- # set +e 00:25:43.531 05:40:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:43.531 05:40:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:43.531 rmmod nvme_tcp 00:25:43.531 rmmod nvme_fabrics 00:25:43.531 rmmod nvme_keyring 00:25:43.531 05:40:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:43.531 05:40:46 -- nvmf/common.sh@123 -- # set -e 00:25:43.531 05:40:46 -- nvmf/common.sh@124 -- # return 0 00:25:43.531 05:40:46 -- nvmf/common.sh@477 -- # '[' -n 1935658 ']' 00:25:43.531 05:40:46 -- nvmf/common.sh@478 -- # killprocess 1935658 00:25:43.531 05:40:46 -- common/autotest_common.sh@936 -- # '[' -z 1935658 ']' 00:25:43.531 05:40:46 -- common/autotest_common.sh@940 -- # kill -0 1935658 00:25:43.531 05:40:46 -- common/autotest_common.sh@941 -- # uname 00:25:43.531 05:40:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.531 05:40:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1935658 00:25:43.531 05:40:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:43.531 05:40:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:43.531 05:40:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1935658' 00:25:43.531 killing process with pid 1935658 00:25:43.531 05:40:46 -- common/autotest_common.sh@955 -- # kill 1935658 00:25:43.531 05:40:46 -- common/autotest_common.sh@960 -- # wait 1935658 00:25:43.531 05:40:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:43.531 05:40:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:43.531 05:40:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:43.531 05:40:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.531 05:40:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:43.531 05:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.531 05:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.531 05:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.070 05:40:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:46.070 00:25:46.070 real 0m7.411s 00:25:46.070 user 0m21.387s 00:25:46.070 sys 
0m1.202s 00:25:46.070 05:40:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:46.070 05:40:48 -- common/autotest_common.sh@10 -- # set +x 00:25:46.070 ************************************ 00:25:46.070 END TEST nvmf_shutdown_tc2 00:25:46.070 ************************************ 00:25:46.070 05:40:48 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:46.070 05:40:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:46.070 05:40:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:46.070 05:40:48 -- common/autotest_common.sh@10 -- # set +x 00:25:46.070 ************************************ 00:25:46.070 START TEST nvmf_shutdown_tc3 00:25:46.070 ************************************ 00:25:46.070 05:40:48 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:25:46.070 05:40:48 -- target/shutdown.sh@120 -- # starttarget 00:25:46.070 05:40:48 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:46.070 05:40:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:46.070 05:40:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.070 05:40:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:46.070 05:40:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:46.070 05:40:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:46.070 05:40:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.070 05:40:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.070 05:40:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.070 05:40:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:46.070 05:40:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:46.070 05:40:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:46.070 05:40:48 -- common/autotest_common.sh@10 -- # set +x 00:25:46.070 05:40:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:46.070 05:40:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:46.070 05:40:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:46.070 05:40:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:46.070 05:40:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:46.070 05:40:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:46.070 05:40:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:46.070 05:40:48 -- nvmf/common.sh@294 -- # net_devs=() 00:25:46.070 05:40:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:46.070 05:40:48 -- nvmf/common.sh@295 -- # e810=() 00:25:46.070 05:40:48 -- nvmf/common.sh@295 -- # local -ga e810 00:25:46.070 05:40:48 -- nvmf/common.sh@296 -- # x722=() 00:25:46.070 05:40:48 -- nvmf/common.sh@296 -- # local -ga x722 00:25:46.070 05:40:48 -- nvmf/common.sh@297 -- # mlx=() 00:25:46.070 05:40:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:46.070 05:40:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.071 05:40:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:46.071 05:40:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:46.071 05:40:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:46.071 05:40:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:46.071 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:46.071 05:40:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:46.071 05:40:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:46.071 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:46.071 05:40:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:46.071 05:40:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.071 05:40:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.071 05:40:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:46.071 Found net devices under 0000:31:00.0: cvl_0_0 00:25:46.071 05:40:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.071 05:40:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:46.071 05:40:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.071 05:40:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.071 05:40:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:46.071 Found net devices under 0000:31:00.1: cvl_0_1 00:25:46.071 05:40:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.071 05:40:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:46.071 05:40:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:46.071 05:40:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 
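The device-discovery loop above resolves each supported PCI function (here the two E810 ports, 0000:31:00.0 and 0000:31:00.1) to its kernel netdev name through sysfs before deciding is_hw=yes. A standalone sketch of that lookup, using the same glob-and-strip idiom as the trace:

#!/usr/bin/env bash
# Sketch of the pci_devs -> net_devs resolution traced above: for each
# whitelisted PCI function, list the interfaces the kernel registered
# under it in sysfs. PCI addresses are the two ports from this run.
shopt -s nullglob   # make an unmatched glob expand to nothing

pci_devs=(0000:31:00.0 0000:31:00.1)
net_devs=()

for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  if (( ${#pci_net_devs[@]} == 0 )); then
    echo "no net devices under $pci" >&2
    continue
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done

The resulting net_devs list (cvl_0_0, cvl_0_1 in this run) is what feeds the namespace setup that follows.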
00:25:46.071 05:40:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.071 05:40:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.071 05:40:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.071 05:40:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:46.071 05:40:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.071 05:40:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.071 05:40:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:46.071 05:40:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.071 05:40:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.071 05:40:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:46.071 05:40:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:46.071 05:40:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.071 05:40:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.071 05:40:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.071 05:40:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.071 05:40:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:46.071 05:40:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.071 05:40:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.071 05:40:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.071 05:40:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:46.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:25:46.071 00:25:46.071 --- 10.0.0.2 ping statistics --- 00:25:46.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.071 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:25:46.071 05:40:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:25:46.071 00:25:46.071 --- 10.0.0.1 ping statistics --- 00:25:46.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.071 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:46.071 05:40:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.071 05:40:49 -- nvmf/common.sh@410 -- # return 0 00:25:46.071 05:40:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:46.071 05:40:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.071 05:40:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:46.071 05:40:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:46.071 05:40:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.071 05:40:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:46.071 05:40:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:46.071 05:40:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:46.071 05:40:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:46.071 05:40:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.071 05:40:49 -- common/autotest_common.sh@10 -- # set +x 00:25:46.071 05:40:49 -- nvmf/common.sh@469 -- # nvmfpid=1937258 00:25:46.071 05:40:49 -- nvmf/common.sh@470 -- # waitforlisten 1937258 00:25:46.071 05:40:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:46.071 05:40:49 -- common/autotest_common.sh@829 -- # '[' -z 1937258 ']' 00:25:46.071 05:40:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.071 05:40:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.071 05:40:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.071 05:40:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.071 05:40:49 -- common/autotest_common.sh@10 -- # set +x 00:25:46.071 [2024-12-07 05:40:49.286247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:46.072 [2024-12-07 05:40:49.286345] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.330 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.330 [2024-12-07 05:40:49.351649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.330 [2024-12-07 05:40:49.404332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:46.330 [2024-12-07 05:40:49.404432] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.330 [2024-12-07 05:40:49.404438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.330 [2024-12-07 05:40:49.404443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
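nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application is serving its UNIX-domain RPC socket at /var/tmp/spdk.sock. A simplified stand-in for that pair is sketched below; the real helper retries an actual RPC against the socket, so checking only for the socket file is a deliberate approximation:

#!/usr/bin/env bash
# Simplified stand-in for nvmfappstart/waitforlisten: start the target in
# the namespace, then poll until its RPC socket appears or the process dies.
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

max_retries=100
for (( i = 0; i < max_retries; i++ )); do
  # Bail out early if the target died during startup.
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
  [[ -S $RPC_SOCK ]] && break
  sleep 0.1
done
[[ -S $RPC_SOCK ]] || { echo "timed out waiting for $RPC_SOCK" >&2; exit 1; }
echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"

The core mask 0x1E matches the four reactors (cores 1-4) reported in the trace; once the socket answers, the test creates the TCP transport and the ten Malloc-backed subsystems shown next.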
00:25:46.330 [2024-12-07 05:40:49.404551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.330 [2024-12-07 05:40:49.404708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.330 [2024-12-07 05:40:49.404832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.330 [2024-12-07 05:40:49.404835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:46.921 05:40:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.921 05:40:50 -- common/autotest_common.sh@862 -- # return 0 00:25:46.921 05:40:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:46.921 05:40:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.921 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:46.921 05:40:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.921 05:40:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.921 05:40:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.921 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:46.921 [2024-12-07 05:40:50.144270] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.921 05:40:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.921 05:40:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:46.921 05:40:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:46.921 05:40:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.921 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:46.921 05:40:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:47.180 05:40:50 -- target/shutdown.sh@28 -- # cat 00:25:47.180 05:40:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:47.180 05:40:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.180 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:47.180 Malloc1 00:25:47.180 [2024-12-07 05:40:50.242812] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.180 Malloc2 
00:25:47.180 Malloc3 00:25:47.180 Malloc4 00:25:47.180 Malloc5 00:25:47.180 Malloc6 00:25:47.441 Malloc7 00:25:47.441 Malloc8 00:25:47.441 Malloc9 00:25:47.441 Malloc10 00:25:47.441 05:40:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.441 05:40:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:47.441 05:40:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.441 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:47.441 05:40:50 -- target/shutdown.sh@124 -- # perfpid=1937564 00:25:47.441 05:40:50 -- target/shutdown.sh@125 -- # waitforlisten 1937564 /var/tmp/bdevperf.sock 00:25:47.441 05:40:50 -- common/autotest_common.sh@829 -- # '[' -z 1937564 ']' 00:25:47.441 05:40:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.441 05:40:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.441 05:40:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.441 05:40:50 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:47.441 05:40:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.441 05:40:50 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:47.441 05:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:47.441 05:40:50 -- nvmf/common.sh@520 -- # config=() 00:25:47.441 05:40:50 -- nvmf/common.sh@520 -- # local subsystem config 00:25:47.441 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.441 { 00:25:47.441 "params": { 00:25:47.441 "name": "Nvme$subsystem", 00:25:47.441 "trtype": "$TEST_TRANSPORT", 00:25:47.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.441 "adrfam": "ipv4", 00:25:47.441 "trsvcid": "$NVMF_PORT", 00:25:47.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.441 "hdgst": ${hdgst:-false}, 00:25:47.441 "ddgst": ${ddgst:-false} 00:25:47.441 }, 00:25:47.441 "method": "bdev_nvme_attach_controller" 00:25:47.441 } 00:25:47.441 EOF 00:25:47.441 )") 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.441 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.441 { 00:25:47.441 "params": { 00:25:47.441 "name": "Nvme$subsystem", 00:25:47.441 "trtype": "$TEST_TRANSPORT", 00:25:47.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.441 "adrfam": "ipv4", 00:25:47.441 "trsvcid": "$NVMF_PORT", 00:25:47.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.441 "hdgst": ${hdgst:-false}, 00:25:47.441 "ddgst": ${ddgst:-false} 00:25:47.441 }, 00:25:47.441 "method": "bdev_nvme_attach_controller" 00:25:47.441 } 00:25:47.441 EOF 00:25:47.441 )") 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.441 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.441 { 00:25:47.441 "params": { 00:25:47.441 "name": "Nvme$subsystem", 00:25:47.441 "trtype": "$TEST_TRANSPORT", 00:25:47.441 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:47.441 "adrfam": "ipv4", 00:25:47.441 "trsvcid": "$NVMF_PORT", 00:25:47.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.441 "hdgst": ${hdgst:-false}, 00:25:47.441 "ddgst": ${ddgst:-false} 00:25:47.441 }, 00:25:47.441 "method": "bdev_nvme_attach_controller" 00:25:47.441 } 00:25:47.441 EOF 00:25:47.441 )") 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.441 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.441 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.441 { 00:25:47.441 "params": { 00:25:47.441 "name": "Nvme$subsystem", 00:25:47.441 "trtype": "$TEST_TRANSPORT", 00:25:47.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.442 "adrfam": "ipv4", 00:25:47.442 "trsvcid": "$NVMF_PORT", 00:25:47.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.442 "hdgst": ${hdgst:-false}, 00:25:47.442 "ddgst": ${ddgst:-false} 00:25:47.442 }, 00:25:47.442 "method": "bdev_nvme_attach_controller" 00:25:47.442 } 00:25:47.442 EOF 00:25:47.442 )") 00:25:47.442 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.442 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.442 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.442 { 00:25:47.442 "params": { 00:25:47.442 "name": "Nvme$subsystem", 00:25:47.442 "trtype": "$TEST_TRANSPORT", 00:25:47.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.442 "adrfam": "ipv4", 00:25:47.442 "trsvcid": "$NVMF_PORT", 00:25:47.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.442 "hdgst": ${hdgst:-false}, 00:25:47.442 "ddgst": ${ddgst:-false} 00:25:47.442 }, 00:25:47.442 "method": "bdev_nvme_attach_controller" 00:25:47.442 } 00:25:47.442 EOF 00:25:47.442 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.702 { 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme$subsystem", 00:25:47.702 "trtype": "$TEST_TRANSPORT", 00:25:47.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.702 "adrfam": "ipv4", 00:25:47.702 "trsvcid": "$NVMF_PORT", 00:25:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.702 "hdgst": ${hdgst:-false}, 00:25:47.702 "ddgst": ${ddgst:-false} 00:25:47.702 }, 00:25:47.702 "method": "bdev_nvme_attach_controller" 00:25:47.702 } 00:25:47.702 EOF 00:25:47.702 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.702 { 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme$subsystem", 00:25:47.702 "trtype": "$TEST_TRANSPORT", 00:25:47.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.702 "adrfam": "ipv4", 00:25:47.702 "trsvcid": "$NVMF_PORT", 00:25:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.702 "hdgst": ${hdgst:-false}, 00:25:47.702 "ddgst": ${ddgst:-false} 00:25:47.702 }, 00:25:47.702 "method": "bdev_nvme_attach_controller" 00:25:47.702 } 00:25:47.702 EOF 00:25:47.702 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 [2024-12-07 05:40:50.697189] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:25:47.702 [2024-12-07 05:40:50.697258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937564 ] 00:25:47.702 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.702 { 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme$subsystem", 00:25:47.702 "trtype": "$TEST_TRANSPORT", 00:25:47.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.702 "adrfam": "ipv4", 00:25:47.702 "trsvcid": "$NVMF_PORT", 00:25:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.702 "hdgst": ${hdgst:-false}, 00:25:47.702 "ddgst": ${ddgst:-false} 00:25:47.702 }, 00:25:47.702 "method": "bdev_nvme_attach_controller" 00:25:47.702 } 00:25:47.702 EOF 00:25:47.702 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.702 { 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme$subsystem", 00:25:47.702 "trtype": "$TEST_TRANSPORT", 00:25:47.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.702 "adrfam": "ipv4", 00:25:47.702 "trsvcid": "$NVMF_PORT", 00:25:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.702 "hdgst": ${hdgst:-false}, 00:25:47.702 "ddgst": ${ddgst:-false} 00:25:47.702 }, 00:25:47.702 "method": "bdev_nvme_attach_controller" 00:25:47.702 } 00:25:47.702 EOF 00:25:47.702 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 05:40:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:47.702 { 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme$subsystem", 00:25:47.702 "trtype": "$TEST_TRANSPORT", 00:25:47.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.702 "adrfam": "ipv4", 00:25:47.702 "trsvcid": "$NVMF_PORT", 00:25:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.702 "hdgst": ${hdgst:-false}, 00:25:47.702 "ddgst": ${ddgst:-false} 00:25:47.702 }, 00:25:47.702 "method": "bdev_nvme_attach_controller" 00:25:47.702 } 00:25:47.702 EOF 00:25:47.702 )") 00:25:47.702 05:40:50 -- nvmf/common.sh@542 -- # cat 00:25:47.702 05:40:50 -- nvmf/common.sh@544 -- # jq . 
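Once bdevperf is running, the waitforio helper seen in the tc2 run above (and repeated for tc3 below) polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads before the shutdown path is exercised. A sketch of that polling loop, assuming spdk/scripts/rpc.py is the client behind the harness's rpc_cmd wrapper:

#!/usr/bin/env bash
# Sketch of the waitforio loop traced in the tc2/tc3 runs: poll bdevperf's
# RPC socket until the first bdev has serviced >= 100 reads.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
bdev=Nvme1n1

ret=1
for (( i = 10; i != 0; i-- )); do
  read_io_count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
  if [[ $read_io_count -ge 100 ]]; then
    ret=0
    break
  fi
  sleep 0.25
done
(( ret == 0 )) || { echo "$bdev never reached 100 reads" >&2; exit 1; }
echo "$bdev read I/O count: $read_io_count"

In the tc2 run the counter went from 42 to 167 between polls; in the tc3 run below it is already at 211 on the first check, so the loop exits immediately and the target process is killed while I/O is in flight.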
00:25:47.702 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.702 05:40:50 -- nvmf/common.sh@545 -- # IFS=, 00:25:47.702 05:40:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:47.702 "params": { 00:25:47.702 "name": "Nvme1", 00:25:47.702 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme2", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme3", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme4", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme5", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme6", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme7", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme8", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 
00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme9", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 },{ 00:25:47.703 "params": { 00:25:47.703 "name": "Nvme10", 00:25:47.703 "trtype": "tcp", 00:25:47.703 "traddr": "10.0.0.2", 00:25:47.703 "adrfam": "ipv4", 00:25:47.703 "trsvcid": "4420", 00:25:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:47.703 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:47.703 "hdgst": false, 00:25:47.703 "ddgst": false 00:25:47.703 }, 00:25:47.703 "method": "bdev_nvme_attach_controller" 00:25:47.703 }' 00:25:47.703 [2024-12-07 05:40:50.759802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.703 [2024-12-07 05:40:50.822345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.084 Running I/O for 10 seconds... 00:25:49.656 05:40:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.656 05:40:52 -- common/autotest_common.sh@862 -- # return 0 00:25:49.656 05:40:52 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:49.656 05:40:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.656 05:40:52 -- common/autotest_common.sh@10 -- # set +x 00:25:49.656 05:40:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.656 05:40:52 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.656 05:40:52 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:49.656 05:40:52 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:49.656 05:40:52 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:49.656 05:40:52 -- target/shutdown.sh@57 -- # local ret=1 00:25:49.656 05:40:52 -- target/shutdown.sh@58 -- # local i 00:25:49.656 05:40:52 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:49.656 05:40:52 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:49.656 05:40:52 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.656 05:40:52 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.656 05:40:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.656 05:40:52 -- common/autotest_common.sh@10 -- # set +x 00:25:49.656 05:40:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.656 05:40:52 -- target/shutdown.sh@60 -- # read_io_count=211 00:25:49.656 05:40:52 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:25:49.656 05:40:52 -- target/shutdown.sh@64 -- # ret=0 00:25:49.656 05:40:52 -- target/shutdown.sh@65 -- # break 00:25:49.656 05:40:52 -- target/shutdown.sh@69 -- # return 0 00:25:49.656 05:40:52 -- target/shutdown.sh@134 -- # killprocess 1937258 00:25:49.656 05:40:52 -- common/autotest_common.sh@936 -- # '[' -z 1937258 ']' 00:25:49.656 05:40:52 -- common/autotest_common.sh@940 -- # kill -0 1937258 00:25:49.656 05:40:52 -- common/autotest_common.sh@941 -- # uname 00:25:49.656 05:40:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.656 05:40:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1937258 00:25:49.944 05:40:52 
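The comma-joined JSON printed above (via the IFS=, / printf pair traced in nvmf/common.sh) is the bdevperf attach configuration: one bdev_nvme_attach_controller entry per subsystem, Nvme1-Nvme10 against nqn.2016-06.io.spdk:cnode1-cnode10 on 10.0.0.2:4420 over TCP/IPv4, with header and data digests disabled. Below is a minimal bash sketch that emits entries of the same shape; the gen_attach_config helper and its loop are illustrative only, not the suite's actual generator.

# Illustrative only: emit bdev_nvme_attach_controller entries shaped like the
# config printed above, comma-joined the same way the trace's IFS=, / printf does.
gen_attach_config() {
  local count=$1 entries=() i
  for ((i = 1; i <= count; i++)); do
    entries+=("{
  \"params\": {
    \"name\": \"Nvme$i\",
    \"trtype\": \"tcp\",
    \"traddr\": \"10.0.0.2\",
    \"adrfam\": \"ipv4\",
    \"trsvcid\": \"4420\",
    \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\",
    \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\",
    \"hdgst\": false,
    \"ddgst\": false
  },
  \"method\": \"bdev_nvme_attach_controller\"
}")
  done
  local IFS=,
  printf '%s\n' "${entries[*]}"   # join the entries with commas, as in the log above
}
gen_attach_config 10              # Nvme1..Nvme10, as above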
-- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.944 05:40:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.944 05:40:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1937258' 00:25:49.944 killing process with pid 1937258 00:25:49.944 05:40:52 -- common/autotest_common.sh@955 -- # kill 1937258 00:25:49.944 05:40:52 -- common/autotest_common.sh@960 -- # wait 1937258 00:25:49.944 [2024-12-07 05:40:52.924868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.944 [2024-12-07 05:40:52.924997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 
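The xtrace above is the shutdown test's wait-and-teardown step: poll bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (211 here), then check the pid and kill and reap it; the tcp.c and nvme_qpair.c messages that follow come from the qpairs being torn down. Below is a minimal stand-alone sketch of the same check, assuming SPDK's scripts/rpc.py and jq are on PATH; rpc_cmd, waitforio and killprocess in the trace are the suite's own wrappers.

# Poll read I/O on a bdev over the bdevperf RPC socket, as the trace above does.
sock=/var/tmp/bdevperf.sock

wait_for_reads() {
  local bdev=$1 tries=10 ops
  while (( tries-- > 0 )); do
    ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
    # the run above broke out of its loop once num_read_ops (211) reached 100
    (( ops >= 100 )) && return 0
    sleep 1
  done
  return 1
}

# Usage, with $perfpid holding the backgrounded app's pid (elided here):
#   wait_for_reads Nvme1n1 || exit 1
#   kill -0 "$perfpid"                                # still running?
#   [[ $(ps --no-headers -o comm= "$perfpid") != sudo ]] \
#     && kill "$perfpid" && wait "$perfpid"           # kill and reap, as killprocess does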
[2024-12-07 05:40:52.925037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the 
state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.925232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b100 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.945 [2024-12-07 05:40:52.926211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926241] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.926343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9da70 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 
00:25:49.946 [2024-12-07 05:40:52.927344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is 
same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.946 [2024-12-07 05:40:52.927518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.927649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9b590 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.928154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 
[2024-12-07 05:40:52.928220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 
05:40:52.928394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.947 [2024-12-07 05:40:52.928475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.947 [2024-12-07 05:40:52.928483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-07 05:40:52.928483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.947 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.947 [2024-12-07 05:40:52.928490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:12[2024-12-07 05:40:52.928496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 he state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-07 05:40:52.928564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 he state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 
00:25:49.948 [2024-12-07 05:40:52.928595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t[2024-12-07 05:40:52.928622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:49.948 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32512 len:12[2024-12-07 05:40:52.928635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 he state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t[2024-12-07 05:40:52.928653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37120 len:1he state(5) to be set 00:25:49.948 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 
[2024-12-07 05:40:52.928680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-07 05:40:52.928690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 he state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t[2024-12-07 05:40:52.928719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32768 len:12he state(5) to be set 00:25:49.948 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.948 [2024-12-07 05:40:52.928726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.948 [2024-12-07 05:40:52.928729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.948 [2024-12-07 05:40:52.928731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 
05:40:52.928757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33024 len:12[2024-12-07 05:40:52.928759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 he state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba20 is same with the state(5) to be set 00:25:49.949 [2024-12-07 05:40:52.928823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 
05:40:52.928841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.928987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.928994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.949 [2024-12-07 05:40:52.929121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.949 [2024-12-07 05:40:52.929128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.950 [2024-12-07 05:40:52.929352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.950 [2024-12-07 05:40:52.929481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.950 [2024-12-07 05:40:52.929614] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929664] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x177b0c0 was disconnected and freed. reset controller. 00:25:49.951 [2024-12-07 05:40:52.929674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set
00:25:49.951 [2024-12-07 05:40:52.929681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.929811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c360 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930464] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.951 [2024-12-07 05:40:52.930513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 
00:25:49.952 [2024-12-07 05:40:52.930571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930576] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is 
same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.930689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9c810 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.952 [2024-12-07 05:40:52.931498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.931535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.937665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:49.953 [2024-12-07 05:40:52.937722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fefe0 (9): Bad file descriptor 00:25:49.953 [2024-12-07 05:40:52.937749] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a4630 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.937868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583450 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.937968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.937992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5abfd0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.938070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b27f0 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.938157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746c60 is same with the state(5) to be set 00:25:49.953 [2024-12-07 05:40:52.938247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.953 [2024-12-07 05:40:52.938295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.953 [2024-12-07 05:40:52.938303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a3de0 is same with the state(5) to be set 00:25:49.954 [2024-12-07 05:40:52.938332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.954 [2024-12-07 05:40:52.938341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.954 [2024-12-07 05:40:52.938357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.954 [2024-12-07 05:40:52.938372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.954 [2024-12-07 05:40:52.938388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5806f0 is same with the state(5) to be set 00:25:49.954 [2024-12-07 05:40:52.938569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.954 [2024-12-07 05:40:52.938745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 
05:40:52.938917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.938985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.938994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.954 [2024-12-07 05:40:52.939002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.954 [2024-12-07 05:40:52.939018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939096] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.955 [2024-12-07 05:40:52.939562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.955 [2024-12-07 05:40:52.939571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.939681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.939728] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd81fe0 was disconnected and freed. reset controller. 00:25:49.956 [2024-12-07 05:40:52.940734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.940987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.940998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.941005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.941021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.941030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.956 [2024-12-07 05:40:52.941047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.956 [2024-12-07 05:40:52.946653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.956 [2024-12-07 05:40:52.946677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.956 [2024-12-07 05:40:52.946687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.956 [2024-12-07 05:40:52.946694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 
05:40:52.946791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.946834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cca0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to 
be set 00:25:49.957 [2024-12-07 05:40:52.947726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.957 [2024-12-07 05:40:52.947887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947941] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.947959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d5e0 is same with the state(5) to be set 00:25:49.958 [2024-12-07 05:40:52.953607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.953988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.953998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.954006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.954024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.954033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.954043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.958 [2024-12-07 05:40:52.954050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.958 [2024-12-07 05:40:52.954061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:49.959 [2024-12-07 05:40:52.954148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 
[2024-12-07 05:40:52.954322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.954479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.954488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5ed0 is same with the state(5) to be set 00:25:49.959 [2024-12-07 05:40:52.954533] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5c5ed0 was disconnected and freed. reset controller. 
00:25:49.959 [2024-12-07 05:40:52.956033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.959 [2024-12-07 05:40:52.956158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.959 [2024-12-07 05:40:52.956168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 
05:40:52.956243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956416] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.960 [2024-12-07 05:40:52.956478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.956489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d200 is same with the state(5) to be set 00:25:49.960 [2024-12-07 05:40:52.956528] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140d200 was disconnected and freed. reset controller. 00:25:49.960 [2024-12-07 05:40:52.956855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:49.960 [2024-12-07 05:40:52.956881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3de0 (9): Bad file descriptor 00:25:49.960 [2024-12-07 05:40:52.957315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-12-07 05:40:52.957563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-12-07 05:40:52.957576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fefe0 with addr=10.0.0.2, port=4420 00:25:49.960 [2024-12-07 05:40:52.957587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fefe0 is same with the state(5) to be set 00:25:49.960 [2024-12-07 05:40:52.957615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a4630 (9): Bad file descriptor 00:25:49.960 [2024-12-07 05:40:52.957653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613630 is same with the state(5) to be set 00:25:49.960 [2024-12-07 05:40:52.957744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.960 [2024-12-07 05:40:52.957807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.960 [2024-12-07 05:40:52.957815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613150 is same with the state(5) to be set 00:25:49.960 [2024-12-07 05:40:52.957834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x583450 (9): Bad file descriptor 00:25:49.960 [2024-12-07 05:40:52.957850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5abfd0 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.957867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b27f0 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.957884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746c60 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.957903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5806f0 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.957917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fefe0 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.961174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.961 [2024-12-07 05:40:52.961203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:49.961 
[2024-12-07 05:40:52.961218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613630 (9): Bad file descriptor 00:25:49.961 [2024-12-07 05:40:52.961301] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.961 [2024-12-07 05:40:52.961352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38016 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.961 [2024-12-07 05:40:52.961804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.961 [2024-12-07 05:40:52.961812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.961822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.961830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.961840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.961848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.961858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.961867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.961877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.961884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.961936] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x72c420 was disconnected and freed. reset controller. 
00:25:49.962 [2024-12-07 05:40:52.962250] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.962 [2024-12-07 05:40:52.962293] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.962 [2024-12-07 05:40:52.962775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.963265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.963309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a3de0 with addr=10.0.0.2, port=4420 00:25:49.962 [2024-12-07 05:40:52.963322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a3de0 is same with the state(5) to be set 00:25:49.962 [2024-12-07 05:40:52.963534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.963949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.963960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5806f0 with addr=10.0.0.2, port=4420 00:25:49.962 [2024-12-07 05:40:52.963968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5806f0 is same with the state(5) to be set 00:25:49.962 [2024-12-07 05:40:52.963990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:49.962 [2024-12-07 05:40:52.963998] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:49.962 [2024-12-07 05:40:52.964007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:49.962 [2024-12-07 05:40:52.966509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.962 [2024-12-07 05:40:52.966532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:49.962 [2024-12-07 05:40:52.966766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.967116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-12-07 05:40:52.967128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613630 with addr=10.0.0.2, port=4420 00:25:49.962 [2024-12-07 05:40:52.967136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613630 is same with the state(5) to be set 00:25:49.962 [2024-12-07 05:40:52.967147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3de0 (9): Bad file descriptor 00:25:49.962 [2024-12-07 05:40:52.967158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5806f0 (9): Bad file descriptor 00:25:49.962 [2024-12-07 05:40:52.967289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.962 [2024-12-07 05:40:52.967505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.962 [2024-12-07 05:40:52.967514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.967767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.967776] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a540 is same with the state(5) to be set 00:25:49.963 [2024-12-07 05:40:52.967818] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x126a540 was disconnected and freed. reset controller. 00:25:49.963 [2024-12-07 05:40:52.967860] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.963 [2024-12-07 05:40:52.968066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-12-07 05:40:52.968259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-12-07 05:40:52.968270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x583450 with addr=10.0.0.2, port=4420 00:25:49.963 [2024-12-07 05:40:52.968278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583450 is same with the state(5) to be set 00:25:49.963 [2024-12-07 05:40:52.968288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613630 (9): Bad file descriptor 00:25:49.963 [2024-12-07 05:40:52.968297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:49.963 [2024-12-07 05:40:52.968308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:49.963 [2024-12-07 05:40:52.968316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:49.963 [2024-12-07 05:40:52.968329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.963 [2024-12-07 05:40:52.968336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.963 [2024-12-07 05:40:52.968344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.963 [2024-12-07 05:40:52.968367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613150 (9): Bad file descriptor 00:25:49.963 [2024-12-07 05:40:52.968394] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.963 [2024-12-07 05:40:52.969737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.963 [2024-12-07 05:40:52.969752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.963 [2024-12-07 05:40:52.969775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:49.963 [2024-12-07 05:40:52.969795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x583450 (9): Bad file descriptor 00:25:49.963 [2024-12-07 05:40:52.969805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:49.963 [2024-12-07 05:40:52.969813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:49.963 [2024-12-07 05:40:52.969822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:49.963 [2024-12-07 05:40:52.969860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.969883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.969904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.969925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.969946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.963 [2024-12-07 05:40:52.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.963 [2024-12-07 05:40:52.969970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.969980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 
05:40:52.970057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.964 [2024-12-07 05:40:52.970533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.964 [2024-12-07 05:40:52.970541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.970985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.970995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.971003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.971016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x570a20 is same with the state(5) to be set 00:25:49.965 [2024-12-07 05:40:52.972256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.965 [2024-12-07 05:40:52.972375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.965 [2024-12-07 05:40:52.972385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.966 [2024-12-07 05:40:52.972546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.966 [2024-12-07 05:40:52.972553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.966 [2024-12-07 05:40:52.972563 .. 05:40:52.973433] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each remaining READ/WRITE command on sqid:1 (lba 32512-40192, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.968 [2024-12-07 05:40:52.973442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf24ba0 is same with the state(5) to be set 
00:25:49.968 [2024-12-07 05:40:52.974679 .. 05:40:52.975819] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each outstanding READ/WRITE command on sqid:1 (cid 0-63, lba 29184-40192, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.970 [2024-12-07 05:40:52.975828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7880 is same with the state(5) to be set 
00:25:49.970 [2024-12-07 05:40:52.977075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:25:49.970 [2024-12-07 05:40:52.977093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.970 [2024-12-07 05:40:52.977102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 
00:25:49.970 [2024-12-07 05:40:52.977117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 
00:25:49.970 [2024-12-07 05:40:52.977488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:49.970 [2024-12-07 05:40:52.977813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:49.970 [2024-12-07 05:40:52.977825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b27f0 with addr=10.0.0.2, port=4420 
00:25:49.970 [2024-12-07 05:40:52.977834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b27f0 is same with the state(5) to be set 
00:25:49.970 [2024-12-07 05:40:52.977843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 
00:25:49.970 [2024-12-07 05:40:52.977850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 
00:25:49.970 [2024-12-07 05:40:52.977858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:49.970 [2024-12-07 05:40:52.977902] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:49.970 [2024-12-07 05:40:52.978222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:25:49.970 [2024-12-07 05:40:52.978237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
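The repeated "connect() failed, errno = 111" messages from posix_sock_create in this stretch of the log correspond to ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections at 10.0.0.2:4420 while the controllers were being reset. A minimal, purely illustrative Python check of that errno mapping (not part of the SPDK test output):

```python
# Illustrative only: map the errno reported by posix_sock_create to its
# symbolic name. On Linux, errno 111 is ECONNREFUSED ("Connection refused"),
# which matches a target at 10.0.0.2:4420 that is not currently listening.
import errno
import os

print(errno.errorcode.get(111))  # 'ECONNREFUSED' on Linux
print(os.strerror(111))          # 'Connection refused'
```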
00:25:49.970 [2024-12-07 05:40:52.978584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.978768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.978779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fefe0 with addr=10.0.0.2, port=4420 00:25:49.970 [2024-12-07 05:40:52.978787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fefe0 is same with the state(5) to be set 00:25:49.970 [2024-12-07 05:40:52.979141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.979321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.979332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a4630 with addr=10.0.0.2, port=4420 00:25:49.970 [2024-12-07 05:40:52.979339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a4630 is same with the state(5) to be set 00:25:49.970 [2024-12-07 05:40:52.979676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.979878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.979889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5abfd0 with addr=10.0.0.2, port=4420 00:25:49.970 [2024-12-07 05:40:52.979897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5abfd0 is same with the state(5) to be set 00:25:49.970 [2024-12-07 05:40:52.979907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b27f0 (9): Bad file descriptor 00:25:49.970 [2024-12-07 05:40:52.980717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.970 [2024-12-07 05:40:52.980732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:49.970 [2024-12-07 05:40:52.980740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:49.970 [2024-12-07 05:40:52.981118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.981476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.970 [2024-12-07 05:40:52.981488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746c60 with addr=10.0.0.2, port=4420 00:25:49.970 [2024-12-07 05:40:52.981496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746c60 is same with the state(5) to be set 00:25:49.970 [2024-12-07 05:40:52.981505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fefe0 (9): Bad file descriptor 00:25:49.970 [2024-12-07 05:40:52.981520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a4630 (9): Bad file descriptor 00:25:49.970 [2024-12-07 05:40:52.981529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5abfd0 (9): Bad file descriptor 00:25:49.970 [2024-12-07 05:40:52.981538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:49.970 [2024-12-07 05:40:52.981545] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:49.970 [2024-12-07 05:40:52.981551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:49.970 [2024-12-07 05:40:52.981565] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.970 [2024-12-07 05:40:52.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.970 [2024-12-07 05:40:52.981761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.970 [2024-12-07 05:40:52.981769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.981989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.981996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 
[2024-12-07 05:40:52.982136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 
05:40:52.982311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.971 [2024-12-07 05:40:52.982318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.971 [2024-12-07 05:40:52.982327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.972 [2024-12-07 05:40:52.982729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.972 [2024-12-07 05:40:52.982737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d83e0 is same with the state(5) to be set 00:25:49.972 [2024-12-07 05:40:52.984493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
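Each aborted command above completes with status "(00/08)", which SPDK prints as (status code type/status code): type 0x00 is the generic command status set and code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base specification. That outcome is consistent with the controller's submission queues being torn down while bdevperf still has reads and writes queued. A minimal shell sketch of that decode (illustrative only, not part of the test scripts):

  sct=0x00; sc=0x08
  case "${sct}/${sc}" in
    0x00/0x08) echo "generic status: command aborted due to SQ deletion" ;;
    *)         echo "consult the NVMe base spec status code tables" ;;
  esac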
00:25:49.972 task offset: 35200 on job bdev=Nvme10n1 fails 00:25:49.972 00:25:49.972 Latency(us) 00:25:49.972 [2024-12-07T04:40:53.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme1n1 ended in about 0.72 seconds with error 00:25:49.972 Verification LBA range: start 0x0 length 0x400 00:25:49.972 Nvme1n1 : 0.72 349.05 21.82 89.00 0.00 145022.85 48496.64 158160.21 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme2n1 ended in about 0.73 seconds with error 00:25:49.972 Verification LBA range: start 0x0 length 0x400 00:25:49.972 Nvme2n1 : 0.73 342.83 21.43 87.42 0.00 146058.59 72526.51 124081.49 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme3n1 ended in about 0.73 seconds with error 00:25:49.972 Verification LBA range: start 0x0 length 0x400 00:25:49.972 Nvme3n1 : 0.73 399.82 24.99 39.98 0.00 140762.61 4778.67 121460.05 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.972 [2024-12-07T04:40:53.212Z] Job: Nvme4n1 ended in about 0.72 seconds with error 00:25:49.972 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme4n1 : 0.72 406.53 25.41 89.41 0.00 123793.66 18568.53 123207.68 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme5n1 ended in about 0.73 seconds with error 00:25:49.973 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme5n1 : 0.73 341.70 21.36 87.13 0.00 141708.25 72526.51 114469.55 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme6n1 ended in about 0.74 seconds with error 00:25:49.973 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme6n1 : 0.74 340.60 21.29 86.85 0.00 140560.50 85633.71 119712.43 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme7n1 ended in about 0.73 seconds with error 00:25:49.973 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme7n1 : 0.73 394.70 24.67 37.00 0.00 137126.49 2607.79 116217.17 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme8n1 ended in about 0.72 seconds with error 00:25:49.973 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme8n1 : 0.72 403.57 25.22 33.28 0.00 131592.13 19770.03 109226.67 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme9n1 ended in about 0.74 seconds with error 00:25:49.973 Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme9n1 : 0.74 337.44 21.09 86.04 0.00 137073.57 84759.89 122333.87 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:49.973 [2024-12-07T04:40:53.213Z] Job: Nvme10n1 ended in about 0.70 seconds with error 00:25:49.973 
Verification LBA range: start 0x0 length 0x400 00:25:49.973 Nvme10n1 : 0.70 364.07 22.75 91.73 0.00 124871.24 8901.97 112721.92 00:25:49.973 [2024-12-07T04:40:53.213Z] =================================================================================================================== 00:25:49.973 [2024-12-07T04:40:53.213Z] Total : 3680.32 230.02 727.84 0.00 136687.18 2607.79 158160.21 00:25:49.973 [2024-12-07 05:40:53.011752] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:49.973 [2024-12-07 05:40:53.011801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:49.973 [2024-12-07 05:40:53.012214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.012569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.012582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5806f0 with addr=10.0.0.2, port=4420 00:25:49.973 [2024-12-07 05:40:53.012593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5806f0 is same with the state(5) to be set 00:25:49.973 [2024-12-07 05:40:53.012925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.013281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.013293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a3de0 with addr=10.0.0.2, port=4420 00:25:49.973 [2024-12-07 05:40:53.013301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a3de0 is same with the state(5) to be set 00:25:49.973 [2024-12-07 05:40:53.013618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.013823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.013834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613630 with addr=10.0.0.2, port=4420 00:25:49.973 [2024-12-07 05:40:53.013842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613630 is same with the state(5) to be set 00:25:49.973 [2024-12-07 05:40:53.013855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746c60 (9): Bad file descriptor 00:25:49.973 [2024-12-07 05:40:53.013867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:49.973 [2024-12-07 05:40:53.013874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:49.973 [2024-12-07 05:40:53.013882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:49.973 [2024-12-07 05:40:53.013898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:49.973 [2024-12-07 05:40:53.013905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:49.973 [2024-12-07 05:40:53.013912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
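The Latency(us) summary above was produced with 64 KiB I/Os (IO size: 65536), so the MiB/s column is simply IOPS divided by 16, and the Total row (3680.32 IOPS, 230.02 MiB/s) is the sum of the ten per-controller rows. A one-line sanity check of that conversion, shown here as an illustrative sketch using the Nvme1n1 row:

  awk 'BEGIN { iops = 349.05; printf "%.2f MiB/s\n", iops * 65536 / (1024 * 1024) }'
  # prints 21.82 MiB/s, matching the 21.82 reported for Nvme1n1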
00:25:49.973 [2024-12-07 05:40:53.013923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:49.973 [2024-12-07 05:40:53.013929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:49.973 [2024-12-07 05:40:53.013938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:49.973 [2024-12-07 05:40:53.014054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.973 [2024-12-07 05:40:53.014066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.973 [2024-12-07 05:40:53.014073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.973 [2024-12-07 05:40:53.014196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.014553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.973 [2024-12-07 05:40:53.014566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613150 with addr=10.0.0.2, port=4420 00:25:49.973 [2024-12-07 05:40:53.014574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613150 is same with the state(5) to be set 00:25:49.973 [2024-12-07 05:40:53.014584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5806f0 (9): Bad file descriptor 00:25:49.973 [2024-12-07 05:40:53.014594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3de0 (9): Bad file descriptor 00:25:49.973 [2024-12-07 05:40:53.014604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613630 (9): Bad file descriptor 00:25:49.973 [2024-12-07 05:40:53.014612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:49.973 [2024-12-07 05:40:53.014619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:49.973 [2024-12-07 05:40:53.014626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:49.974 [2024-12-07 05:40:53.014669] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.974 [2024-12-07 05:40:53.014697] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.974 [2024-12-07 05:40:53.014708] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.974 [2024-12-07 05:40:53.014719] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:49.974 [2024-12-07 05:40:53.015025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.974 [2024-12-07 05:40:53.015055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613150 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.015065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.015072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.015080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.974 [2024-12-07 05:40:53.015091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.015098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.015106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:49.974 [2024-12-07 05:40:53.015116] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.015124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.015131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:49.974 [2024-12-07 05:40:53.015176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:49.974 [2024-12-07 05:40:53.015189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:49.974 [2024-12-07 05:40:53.015199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:49.974 [2024-12-07 05:40:53.015212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:49.974 [2024-12-07 05:40:53.015220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:49.974 [2024-12-07 05:40:53.015231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.015237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.015271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.015279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.015287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:49.974 [2024-12-07 05:40:53.015315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.015331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
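While these resets are being retried, the controllers that bdevperf still holds can be listed over its JSON-RPC socket with bdev_nvme_get_controllers, the same call the multicontroller test further down issues through rpc_cmd. A sketch; the socket path is an assumption borrowed from that later test and may differ for this shutdown run:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # returns a JSON list of the controllers still attached (the cnode1..cnode10 attachments here)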
00:25:49.974 [2024-12-07 05:40:53.015689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.016067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.016080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x583450 with addr=10.0.0.2, port=4420 00:25:49.974 [2024-12-07 05:40:53.016088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583450 is same with the state(5) to be set 00:25:49.974 [2024-12-07 05:40:53.016326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.016692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.016703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5abfd0 with addr=10.0.0.2, port=4420 00:25:49.974 [2024-12-07 05:40:53.016711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5abfd0 is same with the state(5) to be set 00:25:49.974 [2024-12-07 05:40:53.017045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.017425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.017436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a4630 with addr=10.0.0.2, port=4420 00:25:49.974 [2024-12-07 05:40:53.017444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a4630 is same with the state(5) to be set 00:25:49.974 [2024-12-07 05:40:53.017796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.018106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.018118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fefe0 with addr=10.0.0.2, port=4420 00:25:49.974 [2024-12-07 05:40:53.018126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fefe0 is same with the state(5) to be set 00:25:49.974 [2024-12-07 05:40:53.018179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.018500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.974 [2024-12-07 05:40:53.018510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b27f0 with addr=10.0.0.2, port=4420 00:25:49.974 [2024-12-07 05:40:53.018518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b27f0 is same with the state(5) to be set 00:25:49.974 [2024-12-07 05:40:53.018547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x583450 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.018557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5abfd0 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.018566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a4630 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.018579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fefe0 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.018589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x5b27f0 (9): Bad file descriptor 00:25:49.974 [2024-12-07 05:40:53.018627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.018636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.018644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:49.974 [2024-12-07 05:40:53.018653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.018660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.018667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:49.974 [2024-12-07 05:40:53.018677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.018684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.018691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:49.974 [2024-12-07 05:40:53.018700] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.018706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.018714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:49.974 [2024-12-07 05:40:53.018724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:49.974 [2024-12-07 05:40:53.018731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:49.974 [2024-12-07 05:40:53.018738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:49.974 [2024-12-07 05:40:53.018768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.018777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.018784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.018791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.974 [2024-12-07 05:40:53.018797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
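Every reconnect attempt above fails in posix_sock_create with errno 111, which on Linux is ECONNREFUSED: with the target application shutting down, nothing is listening on 10.0.0.2:4420 any more, so each connect() is refused and the controllers end up in the failed state. A quick way to confirm that errno mapping from the shell (illustrative, any Python 3 on the build host):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
  # prints: 111 Connection refused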
00:25:50.312 05:40:53 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:50.312 05:40:53 -- target/shutdown.sh@138 -- # sleep 1 00:25:51.287 05:40:54 -- target/shutdown.sh@141 -- # kill -9 1937564 00:25:51.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1937564) - No such process 00:25:51.287 05:40:54 -- target/shutdown.sh@141 -- # true 00:25:51.287 05:40:54 -- target/shutdown.sh@143 -- # stoptarget 00:25:51.287 05:40:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:51.287 05:40:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:51.287 05:40:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:51.287 05:40:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:51.287 05:40:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:51.287 05:40:54 -- nvmf/common.sh@116 -- # sync 00:25:51.287 05:40:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:51.287 05:40:54 -- nvmf/common.sh@119 -- # set +e 00:25:51.287 05:40:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:51.287 05:40:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:51.287 rmmod nvme_tcp 00:25:51.287 rmmod nvme_fabrics 00:25:51.287 rmmod nvme_keyring 00:25:51.287 05:40:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:51.287 05:40:54 -- nvmf/common.sh@123 -- # set -e 00:25:51.287 05:40:54 -- nvmf/common.sh@124 -- # return 0 00:25:51.287 05:40:54 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:51.287 05:40:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:51.287 05:40:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:51.287 05:40:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:51.287 05:40:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.287 05:40:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:51.287 05:40:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.287 05:40:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.287 05:40:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.203 05:40:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:53.203 00:25:53.203 real 0m7.527s 00:25:53.203 user 0m17.920s 00:25:53.203 sys 0m1.207s 00:25:53.203 05:40:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:53.204 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.204 ************************************ 00:25:53.204 END TEST nvmf_shutdown_tc3 00:25:53.204 ************************************ 00:25:53.204 05:40:56 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:53.204 00:25:53.204 real 0m32.036s 00:25:53.204 user 1m13.202s 00:25:53.204 sys 0m9.440s 00:25:53.204 05:40:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:53.204 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.204 ************************************ 00:25:53.204 END TEST nvmf_shutdown 00:25:53.204 ************************************ 00:25:53.466 05:40:56 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:53.466 05:40:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.466 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.466 05:40:56 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:53.466 05:40:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:53.466 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.466 
05:40:56 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:53.466 05:40:56 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:53.466 05:40:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.466 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.466 ************************************ 00:25:53.466 START TEST nvmf_multicontroller 00:25:53.466 ************************************ 00:25:53.466 05:40:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:53.466 * Looking for test storage... 00:25:53.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.466 05:40:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:53.466 05:40:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:53.466 05:40:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:53.466 05:40:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:53.466 05:40:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:53.466 05:40:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:53.466 05:40:56 -- scripts/common.sh@335 -- # IFS=.-: 00:25:53.466 05:40:56 -- scripts/common.sh@335 -- # read -ra ver1 00:25:53.466 05:40:56 -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.466 05:40:56 -- scripts/common.sh@336 -- # read -ra ver2 00:25:53.466 05:40:56 -- scripts/common.sh@337 -- # local 'op=<' 00:25:53.466 05:40:56 -- scripts/common.sh@339 -- # ver1_l=2 00:25:53.466 05:40:56 -- scripts/common.sh@340 -- # ver2_l=1 00:25:53.466 05:40:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:53.466 05:40:56 -- scripts/common.sh@343 -- # case "$op" in 00:25:53.466 05:40:56 -- scripts/common.sh@344 -- # : 1 00:25:53.466 05:40:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:53.466 05:40:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.466 05:40:56 -- scripts/common.sh@364 -- # decimal 1 00:25:53.466 05:40:56 -- scripts/common.sh@352 -- # local d=1 00:25:53.466 05:40:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.466 05:40:56 -- scripts/common.sh@354 -- # echo 1 00:25:53.466 05:40:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:53.466 05:40:56 -- scripts/common.sh@365 -- # decimal 2 00:25:53.466 05:40:56 -- scripts/common.sh@352 -- # local d=2 00:25:53.466 05:40:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.466 05:40:56 -- scripts/common.sh@354 -- # echo 2 00:25:53.466 05:40:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:53.466 05:40:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:53.466 05:40:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:53.466 05:40:56 -- scripts/common.sh@367 -- # return 0 00:25:53.466 05:40:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.466 --rc genhtml_branch_coverage=1 00:25:53.466 --rc genhtml_function_coverage=1 00:25:53.466 --rc genhtml_legend=1 00:25:53.466 --rc geninfo_all_blocks=1 00:25:53.466 --rc geninfo_unexecuted_blocks=1 00:25:53.466 00:25:53.466 ' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.466 --rc genhtml_branch_coverage=1 00:25:53.466 --rc genhtml_function_coverage=1 00:25:53.466 --rc genhtml_legend=1 00:25:53.466 --rc geninfo_all_blocks=1 00:25:53.466 --rc geninfo_unexecuted_blocks=1 00:25:53.466 00:25:53.466 ' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.466 --rc genhtml_branch_coverage=1 00:25:53.466 --rc genhtml_function_coverage=1 00:25:53.466 --rc genhtml_legend=1 00:25:53.466 --rc geninfo_all_blocks=1 00:25:53.466 --rc geninfo_unexecuted_blocks=1 00:25:53.466 00:25:53.466 ' 00:25:53.466 05:40:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.466 --rc genhtml_branch_coverage=1 00:25:53.466 --rc genhtml_function_coverage=1 00:25:53.466 --rc genhtml_legend=1 00:25:53.466 --rc geninfo_all_blocks=1 00:25:53.466 --rc geninfo_unexecuted_blocks=1 00:25:53.466 00:25:53.466 ' 00:25:53.466 05:40:56 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.466 05:40:56 -- nvmf/common.sh@7 -- # uname -s 00:25:53.728 05:40:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.728 05:40:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.728 05:40:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.728 05:40:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.728 05:40:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.728 05:40:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.728 05:40:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.728 05:40:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.728 05:40:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.728 05:40:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.728 05:40:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:53.728 05:40:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:53.728 05:40:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.728 05:40:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.728 05:40:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.728 05:40:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.728 05:40:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.728 05:40:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.728 05:40:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.728 05:40:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.728 05:40:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.729 05:40:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.729 05:40:56 -- paths/export.sh@5 -- # export PATH 00:25:53.729 05:40:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.729 05:40:56 -- nvmf/common.sh@46 -- # : 0 00:25:53.729 05:40:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:53.729 05:40:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:53.729 05:40:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:53.729 05:40:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.729 05:40:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.729 05:40:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:53.729 05:40:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:53.729 05:40:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:53.729 05:40:56 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.729 05:40:56 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.729 05:40:56 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:53.729 05:40:56 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:53.729 05:40:56 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.729 05:40:56 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:53.729 05:40:56 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:53.729 05:40:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:53.729 05:40:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.729 05:40:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:53.729 05:40:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:53.729 05:40:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:53.729 05:40:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.729 05:40:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.729 05:40:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.729 05:40:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:53.729 05:40:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:53.729 05:40:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:53.729 05:40:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.875 05:41:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:01.875 05:41:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:01.875 05:41:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:01.875 05:41:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:01.875 05:41:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:01.875 05:41:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:01.875 05:41:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:01.875 05:41:03 -- nvmf/common.sh@294 -- # net_devs=() 00:26:01.875 05:41:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:01.875 05:41:03 -- nvmf/common.sh@295 -- # e810=() 00:26:01.875 05:41:03 -- nvmf/common.sh@295 -- # local -ga e810 00:26:01.875 05:41:03 -- nvmf/common.sh@296 -- # x722=() 00:26:01.875 05:41:03 -- nvmf/common.sh@296 -- # local -ga x722 00:26:01.875 05:41:03 -- nvmf/common.sh@297 -- # mlx=() 00:26:01.875 05:41:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:01.875 05:41:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.875 05:41:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:01.875 05:41:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:01.875 05:41:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:01.875 05:41:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:01.875 05:41:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:01.875 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:01.875 05:41:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:01.875 05:41:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:01.875 05:41:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:01.876 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:01.876 05:41:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:01.876 05:41:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:01.876 05:41:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.876 05:41:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:01.876 05:41:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.876 05:41:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:01.876 Found net devices under 0000:31:00.0: cvl_0_0 00:26:01.876 05:41:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.876 05:41:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:01.876 05:41:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.876 05:41:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:01.876 05:41:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.876 05:41:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:01.876 Found net devices under 0000:31:00.1: cvl_0_1 00:26:01.876 05:41:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.876 05:41:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:01.876 05:41:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:01.876 05:41:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:01.876 05:41:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:01.876 05:41:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.876 05:41:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.876 05:41:03 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.876 05:41:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:01.876 05:41:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.876 05:41:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.876 05:41:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:01.876 05:41:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.876 05:41:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.876 05:41:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:01.876 05:41:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:01.876 05:41:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.876 05:41:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.876 05:41:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.876 05:41:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.876 05:41:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:01.876 05:41:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.876 05:41:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.876 05:41:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.876 05:41:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:01.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:26:01.876 00:26:01.876 --- 10.0.0.2 ping statistics --- 00:26:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.876 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:26:01.876 05:41:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:26:01.876 00:26:01.876 --- 10.0.0.1 ping statistics --- 00:26:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.876 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:26:01.876 05:41:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.876 05:41:04 -- nvmf/common.sh@410 -- # return 0 00:26:01.876 05:41:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:01.876 05:41:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.876 05:41:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:01.876 05:41:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:01.876 05:41:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.876 05:41:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:01.876 05:41:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:01.876 05:41:04 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:01.876 05:41:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:01.876 05:41:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:01.876 05:41:04 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 05:41:04 -- nvmf/common.sh@469 -- # nvmfpid=1942473 00:26:01.876 05:41:04 -- nvmf/common.sh@470 -- # waitforlisten 1942473 00:26:01.876 05:41:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:01.876 05:41:04 -- common/autotest_common.sh@829 -- # '[' -z 1942473 ']' 00:26:01.876 05:41:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.876 05:41:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.876 05:41:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.876 05:41:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.876 05:41:04 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 [2024-12-07 05:41:04.218440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:01.876 [2024-12-07 05:41:04.218499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.876 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.876 [2024-12-07 05:41:04.303264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.876 [2024-12-07 05:41:04.395002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:01.876 [2024-12-07 05:41:04.395175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.876 [2024-12-07 05:41:04.395186] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.876 [2024-12-07 05:41:04.395196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
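The connectivity that the target startup above depends on was established by nvmf_tcp_init a few lines earlier: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target interface with 10.0.0.2/24, the second port (cvl_0_1) stayed in the root namespace as the initiator with 10.0.0.1/24, TCP port 4420 was opened in iptables, and both directions were verified with ping. Condensed into a plain shell recipe (a recap of the commands logged above; the interface names are the ones discovered on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target back to initiator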
00:26:01.876 [2024-12-07 05:41:04.395358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.876 [2024-12-07 05:41:04.395562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.876 [2024-12-07 05:41:04.395563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.876 05:41:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.876 05:41:04 -- common/autotest_common.sh@862 -- # return 0 00:26:01.876 05:41:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:01.876 05:41:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.876 05:41:04 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 05:41:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.876 05:41:05 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 [2024-12-07 05:41:05.032990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.876 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.876 05:41:05 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 Malloc0 00:26:01.876 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.876 05:41:05 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.876 05:41:05 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.876 05:41:05 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:01.876 [2024-12-07 05:41:05.101413] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.876 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.876 05:41:05 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.876 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.876 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 [2024-12-07 05:41:05.113361] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:02.137 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:02.137 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 Malloc1 00:26:02.137 05:41:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:02.137 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:02.137 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:02.137 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:02.137 05:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.137 05:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.137 05:41:05 -- host/multicontroller.sh@44 -- # bdevperf_pid=1942822 00:26:02.137 05:41:05 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.137 05:41:05 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:02.137 05:41:05 -- host/multicontroller.sh@47 -- # waitforlisten 1942822 /var/tmp/bdevperf.sock 00:26:02.137 05:41:05 -- common/autotest_common.sh@829 -- # '[' -z 1942822 ']' 00:26:02.137 05:41:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.137 05:41:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.137 05:41:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
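bdevperf has just been launched with -z, which keeps it idle until it is driven over its RPC socket at /var/tmp/bdevperf.sock, and the next steps attach the target's subsystems to it through rpc_cmd. Outside the test wrappers, the first attachment performed below corresponds roughly to this rpc.py invocation (a sketch mirroring the arguments the harness passes):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

The repeated attach attempts that follow deliberately reuse the controller name NVMe0 with a different hostnqn, subsystem, or multipath mode; the JSON-RPC error -114 responses below are the expected negative results for those cases.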
00:26:02.137 05:41:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.137 05:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:03.095 05:41:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.095 05:41:06 -- common/autotest_common.sh@862 -- # return 0 00:26:03.095 05:41:06 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:03.095 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.095 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.095 NVMe0n1 00:26:03.095 05:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.095 05:41:06 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.095 05:41:06 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:03.095 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.095 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.095 05:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.095 1 00:26:03.095 05:41:06 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.095 05:41:06 -- common/autotest_common.sh@650 -- # local es=0 00:26:03.095 05:41:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.095 05:41:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:03.095 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.095 05:41:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:03.095 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.095 05:41:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.095 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.095 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.095 request: 00:26:03.095 { 00:26:03.095 "name": "NVMe0", 00:26:03.095 "trtype": "tcp", 00:26:03.095 "traddr": "10.0.0.2", 00:26:03.095 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:03.095 "hostaddr": "10.0.0.2", 00:26:03.095 "hostsvcid": "60000", 00:26:03.095 "adrfam": "ipv4", 00:26:03.095 "trsvcid": "4420", 00:26:03.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.095 "method": "bdev_nvme_attach_controller", 00:26:03.095 "req_id": 1 00:26:03.095 } 00:26:03.095 Got JSON-RPC error response 00:26:03.095 response: 00:26:03.095 { 00:26:03.095 "code": -114, 00:26:03.095 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.095 } 00:26:03.095 05:41:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:03.095 05:41:06 -- common/autotest_common.sh@653 -- # es=1 00:26:03.095 05:41:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:03.095 05:41:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:03.095 05:41:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:03.095 05:41:06 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.095 05:41:06 -- common/autotest_common.sh@650 -- # local es=0 00:26:03.096 05:41:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.096 05:41:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 request: 00:26:03.096 { 00:26:03.096 "name": "NVMe0", 00:26:03.096 "trtype": "tcp", 00:26:03.096 "traddr": "10.0.0.2", 00:26:03.096 "hostaddr": "10.0.0.2", 00:26:03.096 "hostsvcid": "60000", 00:26:03.096 "adrfam": "ipv4", 00:26:03.096 "trsvcid": "4420", 00:26:03.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:03.096 "method": "bdev_nvme_attach_controller", 00:26:03.096 "req_id": 1 00:26:03.096 } 00:26:03.096 Got JSON-RPC error response 00:26:03.096 response: 00:26:03.096 { 00:26:03.096 "code": -114, 00:26:03.096 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.096 } 00:26:03.096 05:41:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # es=1 00:26:03.096 05:41:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:03.096 05:41:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:03.096 05:41:06 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@650 -- # local es=0 00:26:03.096 05:41:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 request: 00:26:03.096 { 00:26:03.096 "name": "NVMe0", 00:26:03.096 "trtype": "tcp", 00:26:03.096 "traddr": "10.0.0.2", 00:26:03.096 "hostaddr": 
"10.0.0.2", 00:26:03.096 "hostsvcid": "60000", 00:26:03.096 "adrfam": "ipv4", 00:26:03.096 "trsvcid": "4420", 00:26:03.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.096 "multipath": "disable", 00:26:03.096 "method": "bdev_nvme_attach_controller", 00:26:03.096 "req_id": 1 00:26:03.096 } 00:26:03.096 Got JSON-RPC error response 00:26:03.096 response: 00:26:03.096 { 00:26:03.096 "code": -114, 00:26:03.096 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:03.096 } 00:26:03.096 05:41:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # es=1 00:26:03.096 05:41:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:03.096 05:41:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:03.096 05:41:06 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.096 05:41:06 -- common/autotest_common.sh@650 -- # local es=0 00:26:03.096 05:41:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.096 05:41:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:03.096 05:41:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 request: 00:26:03.096 { 00:26:03.096 "name": "NVMe0", 00:26:03.096 "trtype": "tcp", 00:26:03.096 "traddr": "10.0.0.2", 00:26:03.096 "hostaddr": "10.0.0.2", 00:26:03.096 "hostsvcid": "60000", 00:26:03.096 "adrfam": "ipv4", 00:26:03.096 "trsvcid": "4420", 00:26:03.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.096 "multipath": "failover", 00:26:03.096 "method": "bdev_nvme_attach_controller", 00:26:03.096 "req_id": 1 00:26:03.096 } 00:26:03.096 Got JSON-RPC error response 00:26:03.096 response: 00:26:03.096 { 00:26:03.096 "code": -114, 00:26:03.096 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.096 } 00:26:03.096 05:41:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@653 -- # es=1 00:26:03.096 05:41:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:03.096 05:41:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:03.096 05:41:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:03.096 05:41:06 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 00:26:03.096 05:41:06 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:03.096 05:41:06 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.096 05:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.096 05:41:06 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:03.096 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.096 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.358 00:26:03.358 05:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.358 05:41:06 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.358 05:41:06 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:03.358 05:41:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.358 05:41:06 -- common/autotest_common.sh@10 -- # set +x 00:26:03.358 05:41:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.358 05:41:06 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:03.358 05:41:06 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:04.755 0 00:26:04.755 05:41:07 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:04.755 05:41:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.755 05:41:07 -- common/autotest_common.sh@10 -- # set +x 00:26:04.755 05:41:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.755 05:41:07 -- host/multicontroller.sh@100 -- # killprocess 1942822 00:26:04.755 05:41:07 -- common/autotest_common.sh@936 -- # '[' -z 1942822 ']' 00:26:04.755 05:41:07 -- common/autotest_common.sh@940 -- # kill -0 1942822 00:26:04.755 05:41:07 -- common/autotest_common.sh@941 -- # uname 00:26:04.755 05:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.755 05:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1942822 00:26:04.755 05:41:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:04.755 05:41:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:04.755 05:41:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1942822' 00:26:04.755 killing process with pid 1942822 00:26:04.755 05:41:07 -- common/autotest_common.sh@955 -- # kill 1942822 00:26:04.755 05:41:07 -- common/autotest_common.sh@960 -- # wait 1942822 00:26:04.755 05:41:07 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.755 05:41:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.755 05:41:07 -- common/autotest_common.sh@10 -- # set +x 00:26:04.755 05:41:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.755 05:41:07 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:04.755 05:41:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.755 05:41:07 -- common/autotest_common.sh@10 -- # set +x 00:26:04.755 05:41:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.755 05:41:07 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
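What the wrapper calls above drive is SPDK's JSON-RPC interface on the bdevperf application socket: the first bdev_nvme_attach_controller creates bdev NVMe0n1; re-attaching the same controller name with a conflicting hostnqn, a different subnqn, or -x disable / -x failover is rejected with JSON-RPC error -114; a second listener path (port 4421) and a second controller name (NVMe1) are then attached; and bdevperf.py perform_tests runs I/O across both before NVMe1 is detached. A condensed, illustrative sketch of that sequence follows; it assumes an SPDK checkout as the working directory, a bdevperf instance already waiting on /var/tmp/bdevperf.sock, and a target exposing nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420 and 4421 (the $rpc shorthand is an assumption of the sketch, not part of the test scripts):

rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"

# First attach succeeds and creates the NVMe0n1 bdev.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Conflicting re-attaches of the same name (different hostnqn, different subnqn,
# or -x disable / -x failover) are expected to fail with error -114.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover \
    || echo "rejected as expected"

# A second path on port 4421 is accepted, can be detached again, and a second
# controller name (NVMe1) can be attached alongside NVMe0.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc bdev_nvme_get_controllers        # should now report both NVMe0 and NVMe1

# Drive I/O over both controllers, then tear down the second one.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
$rpc bdev_nvme_detach_controller NVMe1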
00:26:04.755 05:41:07 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.755 05:41:07 -- common/autotest_common.sh@1607 -- # read -r file 00:26:04.755 05:41:07 -- common/autotest_common.sh@1606 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:04.755 05:41:07 -- common/autotest_common.sh@1606 -- # sort -u 00:26:04.755 05:41:07 -- common/autotest_common.sh@1608 -- # cat 00:26:04.755 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.755 [2024-12-07 05:41:05.236318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:04.755 [2024-12-07 05:41:05.236374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942822 ] 00:26:04.755 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.755 [2024-12-07 05:41:05.297217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.755 [2024-12-07 05:41:05.359422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.755 [2024-12-07 05:41:06.492772] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name caccfe4f-262c-48d3-8154-4d92d527c837 already exists 00:26:04.755 [2024-12-07 05:41:06.492803] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:caccfe4f-262c-48d3-8154-4d92d527c837 alias for bdev NVMe1n1 00:26:04.755 [2024-12-07 05:41:06.492814] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:04.755 Running I/O for 1 seconds... 00:26:04.755 00:26:04.755 Latency(us) 00:26:04.755 [2024-12-07T04:41:07.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.755 [2024-12-07T04:41:07.995Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:04.755 NVMe0n1 : 1.00 21422.48 83.68 0.00 0.00 5963.35 2293.76 12943.36 00:26:04.755 [2024-12-07T04:41:07.995Z] =================================================================================================================== 00:26:04.755 [2024-12-07T04:41:07.995Z] Total : 21422.48 83.68 0.00 0.00 5963.35 2293.76 12943.36 00:26:04.755 Received shutdown signal, test time was about 1.000000 seconds 00:26:04.755 00:26:04.755 Latency(us) 00:26:04.755 [2024-12-07T04:41:07.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.755 [2024-12-07T04:41:07.995Z] =================================================================================================================== 00:26:04.755 [2024-12-07T04:41:07.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.755 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.755 05:41:07 -- common/autotest_common.sh@1613 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.756 05:41:07 -- common/autotest_common.sh@1607 -- # read -r file 00:26:04.756 05:41:07 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:04.756 05:41:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:04.756 05:41:07 -- nvmf/common.sh@116 -- # sync 00:26:04.756 05:41:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:04.756 05:41:07 -- nvmf/common.sh@119 -- # set +e 00:26:04.756 05:41:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:04.756 05:41:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:04.756 rmmod nvme_tcp 
00:26:04.756 rmmod nvme_fabrics 00:26:04.756 rmmod nvme_keyring 00:26:04.756 05:41:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:04.756 05:41:07 -- nvmf/common.sh@123 -- # set -e 00:26:04.756 05:41:07 -- nvmf/common.sh@124 -- # return 0 00:26:04.756 05:41:07 -- nvmf/common.sh@477 -- # '[' -n 1942473 ']' 00:26:04.756 05:41:07 -- nvmf/common.sh@478 -- # killprocess 1942473 00:26:04.756 05:41:07 -- common/autotest_common.sh@936 -- # '[' -z 1942473 ']' 00:26:04.756 05:41:07 -- common/autotest_common.sh@940 -- # kill -0 1942473 00:26:04.756 05:41:07 -- common/autotest_common.sh@941 -- # uname 00:26:04.756 05:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.756 05:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1942473 00:26:05.017 05:41:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:05.017 05:41:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:05.017 05:41:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1942473' 00:26:05.017 killing process with pid 1942473 00:26:05.017 05:41:08 -- common/autotest_common.sh@955 -- # kill 1942473 00:26:05.017 05:41:08 -- common/autotest_common.sh@960 -- # wait 1942473 00:26:05.017 05:41:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:05.017 05:41:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:05.017 05:41:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:05.017 05:41:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.017 05:41:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:05.017 05:41:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.017 05:41:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.017 05:41:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.556 05:41:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:07.556 00:26:07.556 real 0m13.751s 00:26:07.556 user 0m16.440s 00:26:07.556 sys 0m6.324s 00:26:07.556 05:41:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:07.556 05:41:10 -- common/autotest_common.sh@10 -- # set +x 00:26:07.556 ************************************ 00:26:07.556 END TEST nvmf_multicontroller 00:26:07.556 ************************************ 00:26:07.556 05:41:10 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:07.556 05:41:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:07.556 05:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.556 05:41:10 -- common/autotest_common.sh@10 -- # set +x 00:26:07.556 ************************************ 00:26:07.556 START TEST nvmf_aer 00:26:07.556 ************************************ 00:26:07.556 05:41:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:07.556 * Looking for test storage... 
00:26:07.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.556 05:41:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:07.556 05:41:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:07.556 05:41:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:07.556 05:41:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:07.556 05:41:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:07.556 05:41:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:07.556 05:41:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:07.556 05:41:10 -- scripts/common.sh@335 -- # IFS=.-: 00:26:07.556 05:41:10 -- scripts/common.sh@335 -- # read -ra ver1 00:26:07.556 05:41:10 -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.556 05:41:10 -- scripts/common.sh@336 -- # read -ra ver2 00:26:07.556 05:41:10 -- scripts/common.sh@337 -- # local 'op=<' 00:26:07.556 05:41:10 -- scripts/common.sh@339 -- # ver1_l=2 00:26:07.556 05:41:10 -- scripts/common.sh@340 -- # ver2_l=1 00:26:07.556 05:41:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:07.556 05:41:10 -- scripts/common.sh@343 -- # case "$op" in 00:26:07.556 05:41:10 -- scripts/common.sh@344 -- # : 1 00:26:07.556 05:41:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:07.556 05:41:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.556 05:41:10 -- scripts/common.sh@364 -- # decimal 1 00:26:07.556 05:41:10 -- scripts/common.sh@352 -- # local d=1 00:26:07.556 05:41:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.556 05:41:10 -- scripts/common.sh@354 -- # echo 1 00:26:07.556 05:41:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:07.556 05:41:10 -- scripts/common.sh@365 -- # decimal 2 00:26:07.556 05:41:10 -- scripts/common.sh@352 -- # local d=2 00:26:07.556 05:41:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.556 05:41:10 -- scripts/common.sh@354 -- # echo 2 00:26:07.556 05:41:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:07.556 05:41:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:07.556 05:41:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:07.556 05:41:10 -- scripts/common.sh@367 -- # return 0 00:26:07.556 05:41:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.556 05:41:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:07.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.556 --rc genhtml_branch_coverage=1 00:26:07.556 --rc genhtml_function_coverage=1 00:26:07.556 --rc genhtml_legend=1 00:26:07.556 --rc geninfo_all_blocks=1 00:26:07.556 --rc geninfo_unexecuted_blocks=1 00:26:07.556 00:26:07.556 ' 00:26:07.556 05:41:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:07.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.556 --rc genhtml_branch_coverage=1 00:26:07.556 --rc genhtml_function_coverage=1 00:26:07.556 --rc genhtml_legend=1 00:26:07.556 --rc geninfo_all_blocks=1 00:26:07.556 --rc geninfo_unexecuted_blocks=1 00:26:07.556 00:26:07.556 ' 00:26:07.556 05:41:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:07.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.556 --rc genhtml_branch_coverage=1 00:26:07.557 --rc genhtml_function_coverage=1 00:26:07.557 --rc genhtml_legend=1 00:26:07.557 --rc geninfo_all_blocks=1 00:26:07.557 --rc geninfo_unexecuted_blocks=1 00:26:07.557 00:26:07.557 ' 
00:26:07.557 05:41:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:07.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.557 --rc genhtml_branch_coverage=1 00:26:07.557 --rc genhtml_function_coverage=1 00:26:07.557 --rc genhtml_legend=1 00:26:07.557 --rc geninfo_all_blocks=1 00:26:07.557 --rc geninfo_unexecuted_blocks=1 00:26:07.557 00:26:07.557 ' 00:26:07.557 05:41:10 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.557 05:41:10 -- nvmf/common.sh@7 -- # uname -s 00:26:07.557 05:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.557 05:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.557 05:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.557 05:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.557 05:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.557 05:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.557 05:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.557 05:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.557 05:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.557 05:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.557 05:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.557 05:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.557 05:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.557 05:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.557 05:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.557 05:41:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.557 05:41:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.557 05:41:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.557 05:41:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.557 05:41:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.557 05:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.557 05:41:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.557 05:41:10 -- paths/export.sh@5 -- # export PATH 00:26:07.557 05:41:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.557 05:41:10 -- nvmf/common.sh@46 -- # : 0 00:26:07.557 05:41:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:07.557 05:41:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:07.557 05:41:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:07.557 05:41:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.557 05:41:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.557 05:41:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:07.557 05:41:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:07.557 05:41:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:07.557 05:41:10 -- host/aer.sh@11 -- # nvmftestinit 00:26:07.557 05:41:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:07.557 05:41:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.557 05:41:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:07.557 05:41:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:07.557 05:41:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:07.557 05:41:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.557 05:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.557 05:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.557 05:41:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:07.557 05:41:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:07.557 05:41:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:07.557 05:41:10 -- common/autotest_common.sh@10 -- # set +x 00:26:15.701 05:41:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:15.701 05:41:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:15.701 05:41:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:15.701 05:41:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:15.701 05:41:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:15.701 05:41:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:15.701 05:41:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:15.701 05:41:17 -- nvmf/common.sh@294 -- # net_devs=() 00:26:15.701 05:41:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:15.701 05:41:17 -- nvmf/common.sh@295 -- # e810=() 00:26:15.701 05:41:17 -- nvmf/common.sh@295 -- # local -ga e810 00:26:15.701 05:41:17 -- nvmf/common.sh@296 -- # x722=() 00:26:15.701 
05:41:17 -- nvmf/common.sh@296 -- # local -ga x722 00:26:15.701 05:41:17 -- nvmf/common.sh@297 -- # mlx=() 00:26:15.701 05:41:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:15.701 05:41:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.701 05:41:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:15.701 05:41:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:15.701 05:41:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:15.701 05:41:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:15.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:15.701 05:41:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:15.701 05:41:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:15.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:15.701 05:41:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:15.701 05:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.701 05:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.701 05:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:15.701 Found net devices under 0000:31:00.0: cvl_0_0 00:26:15.701 05:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.701 05:41:17 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:15.701 05:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.701 05:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.701 05:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:15.701 Found net devices under 0000:31:00.1: cvl_0_1 00:26:15.701 05:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.701 05:41:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:15.701 05:41:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:15.701 05:41:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.701 05:41:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.701 05:41:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.701 05:41:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:15.701 05:41:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.701 05:41:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.701 05:41:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:15.701 05:41:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.701 05:41:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.701 05:41:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:15.701 05:41:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:15.701 05:41:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.701 05:41:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.701 05:41:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.701 05:41:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.701 05:41:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:15.701 05:41:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.701 05:41:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.701 05:41:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.701 05:41:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:15.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:26:15.701 00:26:15.701 --- 10.0.0.2 ping statistics --- 00:26:15.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.701 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:26:15.701 05:41:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:26:15.701 00:26:15.701 --- 10.0.0.1 ping statistics --- 00:26:15.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.701 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:26:15.701 05:41:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.701 05:41:17 -- nvmf/common.sh@410 -- # return 0 00:26:15.701 05:41:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:15.701 05:41:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.701 05:41:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:15.701 05:41:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.701 05:41:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:15.701 05:41:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:15.701 05:41:18 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:15.701 05:41:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:15.701 05:41:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:15.701 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.701 05:41:18 -- nvmf/common.sh@469 -- # nvmfpid=1947602 00:26:15.701 05:41:18 -- nvmf/common.sh@470 -- # waitforlisten 1947602 00:26:15.701 05:41:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:15.701 05:41:18 -- common/autotest_common.sh@829 -- # '[' -z 1947602 ']' 00:26:15.701 05:41:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.701 05:41:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.701 05:41:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.701 05:41:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.701 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.701 [2024-12-07 05:41:18.067665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:15.701 [2024-12-07 05:41:18.067760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.701 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.701 [2024-12-07 05:41:18.146355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.701 [2024-12-07 05:41:18.219338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:15.701 [2024-12-07 05:41:18.219467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.701 [2024-12-07 05:41:18.219476] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.701 [2024-12-07 05:41:18.219484] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:15.701 [2024-12-07 05:41:18.219631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.701 [2024-12-07 05:41:18.219749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.701 [2024-12-07 05:41:18.219907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.701 [2024-12-07 05:41:18.219907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.701 05:41:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.701 05:41:18 -- common/autotest_common.sh@862 -- # return 0 00:26:15.701 05:41:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:15.701 05:41:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:15.701 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.701 05:41:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.701 05:41:18 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.702 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.702 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.702 [2024-12-07 05:41:18.904200] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.702 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.702 05:41:18 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:15.702 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.702 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.702 Malloc0 00:26:15.702 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.702 05:41:18 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:15.702 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.702 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.963 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.963 05:41:18 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.963 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.963 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.963 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.963 05:41:18 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.963 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.963 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.963 [2024-12-07 05:41:18.963606] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.963 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.963 05:41:18 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:15.963 05:41:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.963 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:15.963 [2024-12-07 05:41:18.975390] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:15.963 [ 00:26:15.963 { 00:26:15.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:15.963 "subtype": "Discovery", 00:26:15.963 "listen_addresses": [], 00:26:15.963 "allow_any_host": true, 00:26:15.963 "hosts": [] 00:26:15.963 }, 00:26:15.963 { 00:26:15.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:15.963 "subtype": "NVMe", 00:26:15.963 "listen_addresses": [ 00:26:15.963 { 00:26:15.963 "transport": "TCP", 00:26:15.963 "trtype": "TCP", 00:26:15.963 "adrfam": "IPv4", 00:26:15.963 "traddr": "10.0.0.2", 00:26:15.963 "trsvcid": "4420" 00:26:15.963 } 00:26:15.963 ], 00:26:15.963 "allow_any_host": true, 00:26:15.963 "hosts": [], 00:26:15.963 "serial_number": "SPDK00000000000001", 00:26:15.963 "model_number": "SPDK bdev Controller", 00:26:15.963 "max_namespaces": 2, 00:26:15.963 "min_cntlid": 1, 00:26:15.963 "max_cntlid": 65519, 00:26:15.963 "namespaces": [ 00:26:15.963 { 00:26:15.963 "nsid": 1, 00:26:15.963 "bdev_name": "Malloc0", 00:26:15.964 "name": "Malloc0", 00:26:15.964 "nguid": "E466F4F3A7C94FBE8BF1FA5BC75C78E6", 00:26:15.964 "uuid": "e466f4f3-a7c9-4fbe-8bf1-fa5bc75c78e6" 00:26:15.964 } 00:26:15.964 ] 00:26:15.964 } 00:26:15.964 ] 00:26:15.964 05:41:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.964 05:41:18 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:15.964 05:41:18 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:15.964 05:41:18 -- host/aer.sh@33 -- # aerpid=1947734 00:26:15.964 05:41:18 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:15.964 05:41:18 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:15.964 05:41:18 -- common/autotest_common.sh@1254 -- # local i=0 00:26:15.964 05:41:18 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.964 05:41:18 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:26:15.964 05:41:18 -- common/autotest_common.sh@1257 -- # i=1 00:26:15.964 05:41:18 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:26:15.964 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.964 05:41:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:15.964 05:41:19 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:26:15.964 05:41:19 -- common/autotest_common.sh@1257 -- # i=2 00:26:15.964 05:41:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:26:16.225 05:41:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:16.225 05:41:19 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:16.225 05:41:19 -- common/autotest_common.sh@1265 -- # return 0 00:26:16.225 05:41:19 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 Malloc1 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 [ 00:26:16.225 { 00:26:16.225 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:16.225 "subtype": "Discovery", 00:26:16.225 "listen_addresses": [], 00:26:16.225 "allow_any_host": true, 00:26:16.225 "hosts": [] 00:26:16.225 }, 00:26:16.225 { 00:26:16.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.225 "subtype": "NVMe", 00:26:16.225 "listen_addresses": [ 00:26:16.225 { 00:26:16.225 "transport": "TCP", 00:26:16.225 "trtype": "TCP", 00:26:16.225 "adrfam": "IPv4", 00:26:16.225 "traddr": "10.0.0.2", 00:26:16.225 "trsvcid": "4420" 00:26:16.225 } 00:26:16.225 ], 00:26:16.225 "allow_any_host": true, 00:26:16.225 "hosts": [], 00:26:16.225 "serial_number": "SPDK00000000000001", 00:26:16.225 "model_number": "SPDK bdev Controller", 00:26:16.225 "max_namespaces": 2, 00:26:16.225 "min_cntlid": 1, 00:26:16.225 "max_cntlid": 65519, 00:26:16.225 "namespaces": [ 00:26:16.225 { 00:26:16.225 "nsid": 1, 00:26:16.225 "bdev_name": "Malloc0", 00:26:16.225 "name": "Malloc0", 00:26:16.225 Asynchronous Event Request test 00:26:16.225 Attaching to 10.0.0.2 00:26:16.225 Attached to 10.0.0.2 00:26:16.225 Registering asynchronous event callbacks... 00:26:16.225 Starting namespace attribute notice tests for all controllers... 00:26:16.225 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:16.225 aer_cb - Changed Namespace 00:26:16.225 Cleaning up... 
00:26:16.225 "nguid": "E466F4F3A7C94FBE8BF1FA5BC75C78E6", 00:26:16.225 "uuid": "e466f4f3-a7c9-4fbe-8bf1-fa5bc75c78e6" 00:26:16.225 }, 00:26:16.225 { 00:26:16.225 "nsid": 2, 00:26:16.225 "bdev_name": "Malloc1", 00:26:16.225 "name": "Malloc1", 00:26:16.225 "nguid": "EE1F6482156446CDA9415C839BDDB06B", 00:26:16.225 "uuid": "ee1f6482-1564-46cd-a941-5c839bddb06b" 00:26:16.225 } 00:26:16.225 ] 00:26:16.225 } 00:26:16.225 ] 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@43 -- # wait 1947734 00:26:16.225 05:41:19 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.225 05:41:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.225 05:41:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.225 05:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.225 05:41:19 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:16.225 05:41:19 -- host/aer.sh@51 -- # nvmftestfini 00:26:16.225 05:41:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:16.225 05:41:19 -- nvmf/common.sh@116 -- # sync 00:26:16.225 05:41:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:16.225 05:41:19 -- nvmf/common.sh@119 -- # set +e 00:26:16.225 05:41:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:16.225 05:41:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:16.225 rmmod nvme_tcp 00:26:16.225 rmmod nvme_fabrics 00:26:16.225 rmmod nvme_keyring 00:26:16.225 05:41:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:16.225 05:41:19 -- nvmf/common.sh@123 -- # set -e 00:26:16.225 05:41:19 -- nvmf/common.sh@124 -- # return 0 00:26:16.225 05:41:19 -- nvmf/common.sh@477 -- # '[' -n 1947602 ']' 00:26:16.225 05:41:19 -- nvmf/common.sh@478 -- # killprocess 1947602 00:26:16.225 05:41:19 -- common/autotest_common.sh@936 -- # '[' -z 1947602 ']' 00:26:16.225 05:41:19 -- common/autotest_common.sh@940 -- # kill -0 1947602 00:26:16.225 05:41:19 -- common/autotest_common.sh@941 -- # uname 00:26:16.225 05:41:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:16.225 05:41:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1947602 00:26:16.486 05:41:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:16.486 05:41:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:16.486 05:41:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1947602' 00:26:16.486 killing process with pid 1947602 00:26:16.486 05:41:19 -- common/autotest_common.sh@955 -- # kill 1947602 00:26:16.486 [2024-12-07 05:41:19.468031] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:16.486 05:41:19 -- common/autotest_common.sh@960 -- # wait 1947602 00:26:16.486 05:41:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:16.486 05:41:19 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:16.486 05:41:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:16.486 05:41:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.486 05:41:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:16.486 05:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.486 05:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.486 05:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.032 05:41:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:19.032 00:26:19.032 real 0m11.363s 00:26:19.032 user 0m7.788s 00:26:19.032 sys 0m5.982s 00:26:19.032 05:41:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.032 05:41:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.032 ************************************ 00:26:19.032 END TEST nvmf_aer 00:26:19.032 ************************************ 00:26:19.032 05:41:21 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:19.032 05:41:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.032 05:41:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.032 ************************************ 00:26:19.032 START TEST nvmf_async_init 00:26:19.032 ************************************ 00:26:19.032 05:41:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:19.032 * Looking for test storage... 00:26:19.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.032 05:41:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:19.032 05:41:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:19.032 05:41:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:19.032 05:41:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:19.032 05:41:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:19.032 05:41:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:19.032 05:41:21 -- scripts/common.sh@335 -- # IFS=.-: 00:26:19.032 05:41:21 -- scripts/common.sh@335 -- # read -ra ver1 00:26:19.032 05:41:21 -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.032 05:41:21 -- scripts/common.sh@336 -- # read -ra ver2 00:26:19.032 05:41:21 -- scripts/common.sh@337 -- # local 'op=<' 00:26:19.032 05:41:21 -- scripts/common.sh@339 -- # ver1_l=2 00:26:19.032 05:41:21 -- scripts/common.sh@340 -- # ver2_l=1 00:26:19.032 05:41:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:19.032 05:41:21 -- scripts/common.sh@343 -- # case "$op" in 00:26:19.032 05:41:21 -- scripts/common.sh@344 -- # : 1 00:26:19.032 05:41:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:19.032 05:41:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.032 05:41:21 -- scripts/common.sh@364 -- # decimal 1 00:26:19.032 05:41:21 -- scripts/common.sh@352 -- # local d=1 00:26:19.032 05:41:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.032 05:41:21 -- scripts/common.sh@354 -- # echo 1 00:26:19.032 05:41:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:19.032 05:41:21 -- scripts/common.sh@365 -- # decimal 2 00:26:19.032 05:41:21 -- scripts/common.sh@352 -- # local d=2 00:26:19.032 05:41:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.032 05:41:21 -- scripts/common.sh@354 -- # echo 2 00:26:19.032 05:41:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:19.032 05:41:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:19.032 05:41:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:19.032 05:41:21 -- scripts/common.sh@367 -- # return 0 00:26:19.032 05:41:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.032 --rc genhtml_branch_coverage=1 00:26:19.032 --rc genhtml_function_coverage=1 00:26:19.032 --rc genhtml_legend=1 00:26:19.032 --rc geninfo_all_blocks=1 00:26:19.032 --rc geninfo_unexecuted_blocks=1 00:26:19.032 00:26:19.032 ' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.032 --rc genhtml_branch_coverage=1 00:26:19.032 --rc genhtml_function_coverage=1 00:26:19.032 --rc genhtml_legend=1 00:26:19.032 --rc geninfo_all_blocks=1 00:26:19.032 --rc geninfo_unexecuted_blocks=1 00:26:19.032 00:26:19.032 ' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.032 --rc genhtml_branch_coverage=1 00:26:19.032 --rc genhtml_function_coverage=1 00:26:19.032 --rc genhtml_legend=1 00:26:19.032 --rc geninfo_all_blocks=1 00:26:19.032 --rc geninfo_unexecuted_blocks=1 00:26:19.032 00:26:19.032 ' 00:26:19.032 05:41:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.032 --rc genhtml_branch_coverage=1 00:26:19.032 --rc genhtml_function_coverage=1 00:26:19.032 --rc genhtml_legend=1 00:26:19.032 --rc geninfo_all_blocks=1 00:26:19.032 --rc geninfo_unexecuted_blocks=1 00:26:19.032 00:26:19.032 ' 00:26:19.032 05:41:21 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.032 05:41:21 -- nvmf/common.sh@7 -- # uname -s 00:26:19.032 05:41:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.032 05:41:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.032 05:41:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.032 05:41:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.032 05:41:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.032 05:41:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.032 05:41:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.032 05:41:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.032 05:41:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.032 05:41:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.032 05:41:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:19.032 05:41:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:19.032 05:41:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.032 05:41:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.032 05:41:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.032 05:41:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.032 05:41:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.032 05:41:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.032 05:41:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.032 05:41:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.032 05:41:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.032 05:41:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.032 05:41:21 -- paths/export.sh@5 -- # export PATH 00:26:19.033 05:41:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.033 05:41:21 -- nvmf/common.sh@46 -- # : 0 00:26:19.033 05:41:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.033 05:41:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.033 05:41:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.033 05:41:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.033 05:41:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.033 05:41:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.033 05:41:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.033 05:41:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.033 05:41:21 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:19.033 05:41:21 -- host/async_init.sh@14 -- # null_block_size=512 00:26:19.033 05:41:21 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:19.033 05:41:21 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:19.033 05:41:21 -- host/async_init.sh@20 -- # tr -d - 00:26:19.033 05:41:21 -- host/async_init.sh@20 -- # uuidgen 00:26:19.033 05:41:21 -- host/async_init.sh@20 -- # nguid=c4612fcfddab4daeb0f98af90235a340 00:26:19.033 05:41:21 -- host/async_init.sh@22 -- # nvmftestinit 00:26:19.033 05:41:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:19.033 05:41:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.033 05:41:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:19.033 05:41:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:19.033 05:41:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:19.033 05:41:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.033 05:41:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.033 05:41:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.033 05:41:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:19.033 05:41:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:19.033 05:41:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:19.033 05:41:21 -- common/autotest_common.sh@10 -- # set +x 00:26:27.173 05:41:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:27.173 05:41:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:27.173 05:41:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:27.173 05:41:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:27.173 05:41:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:27.173 05:41:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:27.173 05:41:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:27.173 05:41:29 -- nvmf/common.sh@294 -- # net_devs=() 00:26:27.173 05:41:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:27.173 05:41:29 -- nvmf/common.sh@295 -- # e810=() 00:26:27.173 05:41:29 -- nvmf/common.sh@295 -- # local -ga e810 00:26:27.173 05:41:29 -- nvmf/common.sh@296 -- # x722=() 00:26:27.173 05:41:29 -- nvmf/common.sh@296 -- # local -ga x722 00:26:27.173 05:41:29 -- nvmf/common.sh@297 -- # mlx=() 00:26:27.173 05:41:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:27.173 05:41:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.173 05:41:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.173 05:41:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:27.173 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:27.173 05:41:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.173 05:41:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:27.173 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:27.173 05:41:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.173 05:41:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.173 05:41:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.173 05:41:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:27.173 Found net devices under 0000:31:00.0: cvl_0_0 00:26:27.173 05:41:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.173 05:41:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.173 05:41:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.173 05:41:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:27.173 Found net devices under 0000:31:00.1: cvl_0_1 00:26:27.173 05:41:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:27.173 05:41:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:27.173 05:41:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.173 05:41:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.173 05:41:29 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:27.173 05:41:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.173 05:41:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.173 05:41:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:27.173 05:41:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.173 05:41:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.173 05:41:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:27.173 05:41:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:27.173 05:41:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.173 05:41:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.173 05:41:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.173 05:41:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.173 05:41:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:27.173 05:41:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.173 05:41:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.173 05:41:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.173 05:41:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:27.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:26:27.173 00:26:27.173 --- 10.0.0.2 ping statistics --- 00:26:27.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.173 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:26:27.173 05:41:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:26:27.173 00:26:27.173 --- 10.0.0.1 ping statistics --- 00:26:27.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.173 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:26:27.173 05:41:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.173 05:41:29 -- nvmf/common.sh@410 -- # return 0 00:26:27.173 05:41:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:27.173 05:41:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.173 05:41:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:27.173 05:41:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.173 05:41:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:27.173 05:41:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:27.173 05:41:29 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:27.173 05:41:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:27.173 05:41:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.173 05:41:29 -- common/autotest_common.sh@10 -- # set +x 00:26:27.173 05:41:29 -- nvmf/common.sh@469 -- # nvmfpid=1952050 00:26:27.173 05:41:29 -- nvmf/common.sh@470 -- # waitforlisten 1952050 00:26:27.173 05:41:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:27.173 05:41:29 -- common/autotest_common.sh@829 -- # '[' -z 1952050 ']' 00:26:27.173 05:41:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.173 05:41:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.173 05:41:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.173 05:41:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.173 05:41:29 -- common/autotest_common.sh@10 -- # set +x 00:26:27.173 [2024-12-07 05:41:29.579851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:27.173 [2024-12-07 05:41:29.579904] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.173 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.173 [2024-12-07 05:41:29.651764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.173 [2024-12-07 05:41:29.718727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:27.173 [2024-12-07 05:41:29.718854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.173 [2024-12-07 05:41:29.718863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.173 [2024-12-07 05:41:29.718870] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
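[editor's note] The nvmf_tcp_init trace above builds a small two-port loopback on the test host: one E810 port is moved into a network namespace to act as the target side, the other stays in the default namespace as the initiator. A minimal sketch of the equivalent manual bring-up, using only the interface names, addresses, and namespace name that appear in the trace (anything else would be illustrative):

# Move the target-side port into its own namespace so initiator and target
# traffic actually crosses the link between the two E810 ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Target side lives inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify reachability both ways,
# matching the two ping checks logged above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1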
00:26:27.173 [2024-12-07 05:41:29.718898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.173 05:41:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.173 05:41:30 -- common/autotest_common.sh@862 -- # return 0 00:26:27.173 05:41:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:27.173 05:41:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.173 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.432 05:41:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.432 05:41:30 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:27.432 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.432 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.432 [2024-12-07 05:41:30.417884] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.432 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.432 05:41:30 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:27.432 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.432 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.432 null0 00:26:27.432 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.432 05:41:30 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:27.432 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.432 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.432 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.432 05:41:30 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:27.432 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.432 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.432 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.433 05:41:30 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c4612fcfddab4daeb0f98af90235a340 00:26:27.433 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.433 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.433 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.433 05:41:30 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.433 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.433 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.433 [2024-12-07 05:41:30.478199] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.433 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.433 05:41:30 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:27.433 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.433 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 nvme0n1 00:26:27.692 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.692 05:41:30 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:27.692 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.692 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 [ 00:26:27.692 { 00:26:27.692 "name": "nvme0n1", 00:26:27.692 "aliases": [ 00:26:27.692 
"c4612fcf-ddab-4dae-b0f9-8af90235a340" 00:26:27.692 ], 00:26:27.692 "product_name": "NVMe disk", 00:26:27.692 "block_size": 512, 00:26:27.692 "num_blocks": 2097152, 00:26:27.692 "uuid": "c4612fcf-ddab-4dae-b0f9-8af90235a340", 00:26:27.692 "assigned_rate_limits": { 00:26:27.692 "rw_ios_per_sec": 0, 00:26:27.692 "rw_mbytes_per_sec": 0, 00:26:27.692 "r_mbytes_per_sec": 0, 00:26:27.692 "w_mbytes_per_sec": 0 00:26:27.692 }, 00:26:27.692 "claimed": false, 00:26:27.692 "zoned": false, 00:26:27.692 "supported_io_types": { 00:26:27.692 "read": true, 00:26:27.692 "write": true, 00:26:27.692 "unmap": false, 00:26:27.692 "write_zeroes": true, 00:26:27.692 "flush": true, 00:26:27.692 "reset": true, 00:26:27.692 "compare": true, 00:26:27.692 "compare_and_write": true, 00:26:27.692 "abort": true, 00:26:27.692 "nvme_admin": true, 00:26:27.692 "nvme_io": true 00:26:27.692 }, 00:26:27.692 "driver_specific": { 00:26:27.692 "nvme": [ 00:26:27.692 { 00:26:27.692 "trid": { 00:26:27.692 "trtype": "TCP", 00:26:27.692 "adrfam": "IPv4", 00:26:27.692 "traddr": "10.0.0.2", 00:26:27.692 "trsvcid": "4420", 00:26:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:27.692 }, 00:26:27.692 "ctrlr_data": { 00:26:27.692 "cntlid": 1, 00:26:27.692 "vendor_id": "0x8086", 00:26:27.692 "model_number": "SPDK bdev Controller", 00:26:27.692 "serial_number": "00000000000000000000", 00:26:27.692 "firmware_revision": "24.01.1", 00:26:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.692 "oacs": { 00:26:27.692 "security": 0, 00:26:27.692 "format": 0, 00:26:27.692 "firmware": 0, 00:26:27.692 "ns_manage": 0 00:26:27.692 }, 00:26:27.692 "multi_ctrlr": true, 00:26:27.692 "ana_reporting": false 00:26:27.692 }, 00:26:27.692 "vs": { 00:26:27.692 "nvme_version": "1.3" 00:26:27.692 }, 00:26:27.692 "ns_data": { 00:26:27.692 "id": 1, 00:26:27.692 "can_share": true 00:26:27.692 } 00:26:27.692 } 00:26:27.692 ], 00:26:27.692 "mp_policy": "active_passive" 00:26:27.692 } 00:26:27.692 } 00:26:27.692 ] 00:26:27.692 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.692 05:41:30 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:27.692 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.692 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 [2024-12-07 05:41:30.750735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.692 [2024-12-07 05:41:30.750796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19510f0 (9): Bad file descriptor 00:26:27.692 [2024-12-07 05:41:30.883102] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:27.692 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.692 05:41:30 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:27.692 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.692 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 [ 00:26:27.692 { 00:26:27.692 "name": "nvme0n1", 00:26:27.692 "aliases": [ 00:26:27.692 "c4612fcf-ddab-4dae-b0f9-8af90235a340" 00:26:27.692 ], 00:26:27.693 "product_name": "NVMe disk", 00:26:27.693 "block_size": 512, 00:26:27.693 "num_blocks": 2097152, 00:26:27.693 "uuid": "c4612fcf-ddab-4dae-b0f9-8af90235a340", 00:26:27.693 "assigned_rate_limits": { 00:26:27.693 "rw_ios_per_sec": 0, 00:26:27.693 "rw_mbytes_per_sec": 0, 00:26:27.693 "r_mbytes_per_sec": 0, 00:26:27.693 "w_mbytes_per_sec": 0 00:26:27.693 }, 00:26:27.693 "claimed": false, 00:26:27.693 "zoned": false, 00:26:27.693 "supported_io_types": { 00:26:27.693 "read": true, 00:26:27.693 "write": true, 00:26:27.693 "unmap": false, 00:26:27.693 "write_zeroes": true, 00:26:27.693 "flush": true, 00:26:27.693 "reset": true, 00:26:27.693 "compare": true, 00:26:27.693 "compare_and_write": true, 00:26:27.693 "abort": true, 00:26:27.693 "nvme_admin": true, 00:26:27.693 "nvme_io": true 00:26:27.693 }, 00:26:27.693 "driver_specific": { 00:26:27.693 "nvme": [ 00:26:27.693 { 00:26:27.693 "trid": { 00:26:27.693 "trtype": "TCP", 00:26:27.693 "adrfam": "IPv4", 00:26:27.693 "traddr": "10.0.0.2", 00:26:27.693 "trsvcid": "4420", 00:26:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:27.693 }, 00:26:27.693 "ctrlr_data": { 00:26:27.693 "cntlid": 2, 00:26:27.693 "vendor_id": "0x8086", 00:26:27.693 "model_number": "SPDK bdev Controller", 00:26:27.693 "serial_number": "00000000000000000000", 00:26:27.693 "firmware_revision": "24.01.1", 00:26:27.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.693 "oacs": { 00:26:27.693 "security": 0, 00:26:27.693 "format": 0, 00:26:27.693 "firmware": 0, 00:26:27.693 "ns_manage": 0 00:26:27.693 }, 00:26:27.693 "multi_ctrlr": true, 00:26:27.693 "ana_reporting": false 00:26:27.693 }, 00:26:27.693 "vs": { 00:26:27.693 "nvme_version": "1.3" 00:26:27.693 }, 00:26:27.693 "ns_data": { 00:26:27.693 "id": 1, 00:26:27.693 "can_share": true 00:26:27.693 } 00:26:27.693 } 00:26:27.693 ], 00:26:27.693 "mp_policy": "active_passive" 00:26:27.693 } 00:26:27.693 } 00:26:27.693 ] 00:26:27.693 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.693 05:41:30 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.693 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.693 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.693 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 05:41:30 -- host/async_init.sh@53 -- # mktemp 00:26:27.952 05:41:30 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.h24s2c4e9N 00:26:27.952 05:41:30 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:27.952 05:41:30 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.h24s2c4e9N 00:26:27.952 05:41:30 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:27.952 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 05:41:30 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:27.952 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 [2024-12-07 05:41:30.955382] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:27.952 [2024-12-07 05:41:30.955517] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:27.952 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 05:41:30 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h24s2c4e9N 00:26:27.952 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 05:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 05:41:30 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h24s2c4e9N 00:26:27.952 05:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 05:41:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 [2024-12-07 05:41:30.979445] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:27.952 nvme0n1 00:26:27.952 05:41:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 05:41:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:27.952 05:41:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 05:41:31 -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 [ 00:26:27.952 { 00:26:27.952 "name": "nvme0n1", 00:26:27.952 "aliases": [ 00:26:27.952 "c4612fcf-ddab-4dae-b0f9-8af90235a340" 00:26:27.952 ], 00:26:27.952 "product_name": "NVMe disk", 00:26:27.952 "block_size": 512, 00:26:27.952 "num_blocks": 2097152, 00:26:27.952 "uuid": "c4612fcf-ddab-4dae-b0f9-8af90235a340", 00:26:27.952 "assigned_rate_limits": { 00:26:27.952 "rw_ios_per_sec": 0, 00:26:27.952 "rw_mbytes_per_sec": 0, 00:26:27.952 "r_mbytes_per_sec": 0, 00:26:27.952 "w_mbytes_per_sec": 0 00:26:27.952 }, 00:26:27.952 "claimed": false, 00:26:27.952 "zoned": false, 00:26:27.952 "supported_io_types": { 00:26:27.953 "read": true, 00:26:27.953 "write": true, 00:26:27.953 "unmap": false, 00:26:27.953 "write_zeroes": true, 00:26:27.953 "flush": true, 00:26:27.953 "reset": true, 00:26:27.953 "compare": true, 00:26:27.953 "compare_and_write": true, 00:26:27.953 "abort": true, 00:26:27.953 "nvme_admin": true, 00:26:27.953 "nvme_io": true 00:26:27.953 }, 00:26:27.953 "driver_specific": { 00:26:27.953 "nvme": [ 00:26:27.953 { 00:26:27.953 "trid": { 00:26:27.953 "trtype": "TCP", 00:26:27.953 "adrfam": "IPv4", 00:26:27.953 "traddr": "10.0.0.2", 00:26:27.953 "trsvcid": "4421", 00:26:27.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:27.953 }, 00:26:27.953 "ctrlr_data": { 00:26:27.953 "cntlid": 3, 00:26:27.953 "vendor_id": "0x8086", 00:26:27.953 "model_number": "SPDK bdev Controller", 00:26:27.953 "serial_number": "00000000000000000000", 00:26:27.953 "firmware_revision": "24.01.1", 00:26:27.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.953 "oacs": { 00:26:27.953 "security": 0, 00:26:27.953 "format": 0, 00:26:27.953 "firmware": 0, 00:26:27.953 "ns_manage": 0 00:26:27.953 }, 00:26:27.953 "multi_ctrlr": true, 00:26:27.953 "ana_reporting": false 00:26:27.953 }, 00:26:27.953 "vs": 
{ 00:26:27.953 "nvme_version": "1.3" 00:26:27.953 }, 00:26:27.953 "ns_data": { 00:26:27.953 "id": 1, 00:26:27.953 "can_share": true 00:26:27.953 } 00:26:27.953 } 00:26:27.953 ], 00:26:27.953 "mp_policy": "active_passive" 00:26:27.953 } 00:26:27.953 } 00:26:27.953 ] 00:26:27.953 05:41:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.953 05:41:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.953 05:41:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.953 05:41:31 -- common/autotest_common.sh@10 -- # set +x 00:26:27.953 05:41:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.953 05:41:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.h24s2c4e9N 00:26:27.953 05:41:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:27.953 05:41:31 -- host/async_init.sh@78 -- # nvmftestfini 00:26:27.953 05:41:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:27.953 05:41:31 -- nvmf/common.sh@116 -- # sync 00:26:27.953 05:41:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:27.953 05:41:31 -- nvmf/common.sh@119 -- # set +e 00:26:27.953 05:41:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:27.953 05:41:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:27.953 rmmod nvme_tcp 00:26:27.953 rmmod nvme_fabrics 00:26:27.953 rmmod nvme_keyring 00:26:27.953 05:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:27.953 05:41:31 -- nvmf/common.sh@123 -- # set -e 00:26:27.953 05:41:31 -- nvmf/common.sh@124 -- # return 0 00:26:27.953 05:41:31 -- nvmf/common.sh@477 -- # '[' -n 1952050 ']' 00:26:27.953 05:41:31 -- nvmf/common.sh@478 -- # killprocess 1952050 00:26:27.953 05:41:31 -- common/autotest_common.sh@936 -- # '[' -z 1952050 ']' 00:26:27.953 05:41:31 -- common/autotest_common.sh@940 -- # kill -0 1952050 00:26:27.953 05:41:31 -- common/autotest_common.sh@941 -- # uname 00:26:27.953 05:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.953 05:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1952050 00:26:28.212 05:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:28.212 05:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:28.212 05:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1952050' 00:26:28.212 killing process with pid 1952050 00:26:28.212 05:41:31 -- common/autotest_common.sh@955 -- # kill 1952050 00:26:28.212 05:41:31 -- common/autotest_common.sh@960 -- # wait 1952050 00:26:28.212 05:41:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:28.212 05:41:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:28.212 05:41:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:28.212 05:41:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.212 05:41:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:28.212 05:41:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.212 05:41:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.212 05:41:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.758 05:41:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:30.758 00:26:30.758 real 0m11.706s 00:26:30.758 user 0m4.219s 00:26:30.758 sys 0m5.962s 00:26:30.758 05:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:30.758 05:41:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.758 ************************************ 00:26:30.758 END TEST nvmf_async_init 00:26:30.758 
************************************ 00:26:30.758 05:41:33 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:30.758 05:41:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:30.758 05:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:30.758 05:41:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.758 ************************************ 00:26:30.758 START TEST dma 00:26:30.758 ************************************ 00:26:30.758 05:41:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:30.758 * Looking for test storage... 00:26:30.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.758 05:41:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:30.758 05:41:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:30.758 05:41:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:30.758 05:41:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:30.758 05:41:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:30.758 05:41:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:30.758 05:41:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:30.758 05:41:33 -- scripts/common.sh@335 -- # IFS=.-: 00:26:30.758 05:41:33 -- scripts/common.sh@335 -- # read -ra ver1 00:26:30.758 05:41:33 -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.758 05:41:33 -- scripts/common.sh@336 -- # read -ra ver2 00:26:30.758 05:41:33 -- scripts/common.sh@337 -- # local 'op=<' 00:26:30.758 05:41:33 -- scripts/common.sh@339 -- # ver1_l=2 00:26:30.758 05:41:33 -- scripts/common.sh@340 -- # ver2_l=1 00:26:30.758 05:41:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:30.758 05:41:33 -- scripts/common.sh@343 -- # case "$op" in 00:26:30.758 05:41:33 -- scripts/common.sh@344 -- # : 1 00:26:30.758 05:41:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:30.758 05:41:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.758 05:41:33 -- scripts/common.sh@364 -- # decimal 1 00:26:30.758 05:41:33 -- scripts/common.sh@352 -- # local d=1 00:26:30.758 05:41:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.758 05:41:33 -- scripts/common.sh@354 -- # echo 1 00:26:30.758 05:41:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:30.758 05:41:33 -- scripts/common.sh@365 -- # decimal 2 00:26:30.758 05:41:33 -- scripts/common.sh@352 -- # local d=2 00:26:30.758 05:41:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.758 05:41:33 -- scripts/common.sh@354 -- # echo 2 00:26:30.758 05:41:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:30.758 05:41:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:30.758 05:41:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:30.758 05:41:33 -- scripts/common.sh@367 -- # return 0 00:26:30.758 05:41:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.758 05:41:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:30.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.758 --rc genhtml_branch_coverage=1 00:26:30.758 --rc genhtml_function_coverage=1 00:26:30.758 --rc genhtml_legend=1 00:26:30.758 --rc geninfo_all_blocks=1 00:26:30.758 --rc geninfo_unexecuted_blocks=1 00:26:30.758 00:26:30.758 ' 00:26:30.758 05:41:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:30.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.758 --rc genhtml_branch_coverage=1 00:26:30.758 --rc genhtml_function_coverage=1 00:26:30.758 --rc genhtml_legend=1 00:26:30.758 --rc geninfo_all_blocks=1 00:26:30.758 --rc geninfo_unexecuted_blocks=1 00:26:30.758 00:26:30.758 ' 00:26:30.758 05:41:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:30.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.758 --rc genhtml_branch_coverage=1 00:26:30.758 --rc genhtml_function_coverage=1 00:26:30.758 --rc genhtml_legend=1 00:26:30.758 --rc geninfo_all_blocks=1 00:26:30.758 --rc geninfo_unexecuted_blocks=1 00:26:30.758 00:26:30.758 ' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.759 --rc genhtml_branch_coverage=1 00:26:30.759 --rc genhtml_function_coverage=1 00:26:30.759 --rc genhtml_legend=1 00:26:30.759 --rc geninfo_all_blocks=1 00:26:30.759 --rc geninfo_unexecuted_blocks=1 00:26:30.759 00:26:30.759 ' 00:26:30.759 05:41:33 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.759 05:41:33 -- nvmf/common.sh@7 -- # uname -s 00:26:30.759 05:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.759 05:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.759 05:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.759 05:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.759 05:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.759 05:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.759 05:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.759 05:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.759 05:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.759 05:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.759 05:41:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.759 05:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.759 05:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.759 05:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.759 05:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.759 05:41:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.759 05:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.759 05:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.759 05:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.759 05:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.759 05:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.759 05:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.759 05:41:33 -- paths/export.sh@5 -- # export PATH 00:26:30.759 05:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.759 05:41:33 -- nvmf/common.sh@46 -- # : 0 00:26:30.759 05:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:30.759 05:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:30.759 05:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:30.759 05:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.759 05:41:33 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.759 05:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:30.759 05:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:30.759 05:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:30.759 05:41:33 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:30.759 05:41:33 -- host/dma.sh@13 -- # exit 0 00:26:30.759 00:26:30.759 real 0m0.218s 00:26:30.759 user 0m0.131s 00:26:30.759 sys 0m0.097s 00:26:30.759 05:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:30.759 05:41:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.759 ************************************ 00:26:30.759 END TEST dma 00:26:30.759 ************************************ 00:26:30.759 05:41:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:30.759 05:41:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:30.759 05:41:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.759 ************************************ 00:26:30.759 START TEST nvmf_identify 00:26:30.759 ************************************ 00:26:30.759 05:41:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:30.759 * Looking for test storage... 00:26:30.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.759 05:41:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:30.759 05:41:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:30.759 05:41:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:30.759 05:41:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:30.759 05:41:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:30.759 05:41:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:30.759 05:41:33 -- scripts/common.sh@335 -- # IFS=.-: 00:26:30.759 05:41:33 -- scripts/common.sh@335 -- # read -ra ver1 00:26:30.759 05:41:33 -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.759 05:41:33 -- scripts/common.sh@336 -- # read -ra ver2 00:26:30.759 05:41:33 -- scripts/common.sh@337 -- # local 'op=<' 00:26:30.759 05:41:33 -- scripts/common.sh@339 -- # ver1_l=2 00:26:30.759 05:41:33 -- scripts/common.sh@340 -- # ver2_l=1 00:26:30.759 05:41:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:30.759 05:41:33 -- scripts/common.sh@343 -- # case "$op" in 00:26:30.759 05:41:33 -- scripts/common.sh@344 -- # : 1 00:26:30.759 05:41:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:30.759 05:41:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.759 05:41:33 -- scripts/common.sh@364 -- # decimal 1 00:26:30.759 05:41:33 -- scripts/common.sh@352 -- # local d=1 00:26:30.759 05:41:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.759 05:41:33 -- scripts/common.sh@354 -- # echo 1 00:26:30.759 05:41:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:30.759 05:41:33 -- scripts/common.sh@365 -- # decimal 2 00:26:30.759 05:41:33 -- scripts/common.sh@352 -- # local d=2 00:26:30.759 05:41:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.759 05:41:33 -- scripts/common.sh@354 -- # echo 2 00:26:30.759 05:41:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:30.759 05:41:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:30.759 05:41:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:30.759 05:41:33 -- scripts/common.sh@367 -- # return 0 00:26:30.759 05:41:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.759 --rc genhtml_branch_coverage=1 00:26:30.759 --rc genhtml_function_coverage=1 00:26:30.759 --rc genhtml_legend=1 00:26:30.759 --rc geninfo_all_blocks=1 00:26:30.759 --rc geninfo_unexecuted_blocks=1 00:26:30.759 00:26:30.759 ' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.759 --rc genhtml_branch_coverage=1 00:26:30.759 --rc genhtml_function_coverage=1 00:26:30.759 --rc genhtml_legend=1 00:26:30.759 --rc geninfo_all_blocks=1 00:26:30.759 --rc geninfo_unexecuted_blocks=1 00:26:30.759 00:26:30.759 ' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.759 --rc genhtml_branch_coverage=1 00:26:30.759 --rc genhtml_function_coverage=1 00:26:30.759 --rc genhtml_legend=1 00:26:30.759 --rc geninfo_all_blocks=1 00:26:30.759 --rc geninfo_unexecuted_blocks=1 00:26:30.759 00:26:30.759 ' 00:26:30.759 05:41:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.759 --rc genhtml_branch_coverage=1 00:26:30.759 --rc genhtml_function_coverage=1 00:26:30.759 --rc genhtml_legend=1 00:26:30.759 --rc geninfo_all_blocks=1 00:26:30.759 --rc geninfo_unexecuted_blocks=1 00:26:30.759 00:26:30.759 ' 00:26:30.759 05:41:33 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.759 05:41:33 -- nvmf/common.sh@7 -- # uname -s 00:26:30.759 05:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.759 05:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.759 05:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.759 05:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.759 05:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.759 05:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.759 05:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.759 05:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.759 05:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.759 05:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.759 05:41:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.759 05:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:30.759 05:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.759 05:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.759 05:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.760 05:41:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.760 05:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.760 05:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.760 05:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.760 05:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.760 05:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.760 05:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.760 05:41:33 -- paths/export.sh@5 -- # export PATH 00:26:30.760 05:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.760 05:41:33 -- nvmf/common.sh@46 -- # : 0 00:26:30.760 05:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:30.760 05:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:30.760 05:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:30.760 05:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.760 05:41:33 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.760 05:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:30.760 05:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:30.760 05:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:30.760 05:41:33 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:30.760 05:41:33 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:30.760 05:41:33 -- host/identify.sh@14 -- # nvmftestinit 00:26:30.760 05:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:30.760 05:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.760 05:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:30.760 05:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:30.760 05:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:30.760 05:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.760 05:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.760 05:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.760 05:41:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:30.760 05:41:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:30.760 05:41:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:30.760 05:41:33 -- common/autotest_common.sh@10 -- # set +x 00:26:38.899 05:41:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:38.899 05:41:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:38.899 05:41:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:38.899 05:41:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:38.899 05:41:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:38.899 05:41:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:38.900 05:41:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:38.900 05:41:41 -- nvmf/common.sh@294 -- # net_devs=() 00:26:38.900 05:41:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:38.900 05:41:41 -- nvmf/common.sh@295 -- # e810=() 00:26:38.900 05:41:41 -- nvmf/common.sh@295 -- # local -ga e810 00:26:38.900 05:41:41 -- nvmf/common.sh@296 -- # x722=() 00:26:38.900 05:41:41 -- nvmf/common.sh@296 -- # local -ga x722 00:26:38.900 05:41:41 -- nvmf/common.sh@297 -- # mlx=() 00:26:38.900 05:41:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:38.900 05:41:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.900 05:41:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:38.900 05:41:41 
-- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:38.900 05:41:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:38.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:38.900 05:41:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:38.900 05:41:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:38.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:38.900 05:41:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:38.900 05:41:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.900 05:41:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.900 05:41:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:38.900 Found net devices under 0000:31:00.0: cvl_0_0 00:26:38.900 05:41:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:38.900 05:41:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.900 05:41:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.900 05:41:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:38.900 Found net devices under 0000:31:00.1: cvl_0_1 00:26:38.900 05:41:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:38.900 05:41:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:38.900 05:41:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.900 05:41:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.900 05:41:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:38.900 05:41:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.900 05:41:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.900 05:41:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:38.900 05:41:41 
-- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.900 05:41:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.900 05:41:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:38.900 05:41:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:38.900 05:41:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.900 05:41:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.900 05:41:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.900 05:41:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.900 05:41:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:38.900 05:41:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.900 05:41:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.900 05:41:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.900 05:41:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:38.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:26:38.900 00:26:38.900 --- 10.0.0.2 ping statistics --- 00:26:38.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.900 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:26:38.900 05:41:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:26:38.900 00:26:38.900 --- 10.0.0.1 ping statistics --- 00:26:38.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.900 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:26:38.900 05:41:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.900 05:41:41 -- nvmf/common.sh@410 -- # return 0 00:26:38.900 05:41:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:38.900 05:41:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.900 05:41:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:38.900 05:41:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.900 05:41:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:38.900 05:41:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:38.900 05:41:41 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:38.900 05:41:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.900 05:41:41 -- common/autotest_common.sh@10 -- # set +x 00:26:38.900 05:41:41 -- host/identify.sh@19 -- # nvmfpid=1956859 00:26:38.900 05:41:41 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.900 05:41:41 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:38.900 05:41:41 -- host/identify.sh@23 -- # waitforlisten 1956859 00:26:38.900 05:41:41 -- common/autotest_common.sh@829 -- # '[' -z 1956859 ']' 00:26:38.900 05:41:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.900 05:41:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.900 05:41:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.900 05:41:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.900 05:41:41 -- common/autotest_common.sh@10 -- # set +x 00:26:38.900 [2024-12-07 05:41:41.449885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:38.900 [2024-12-07 05:41:41.449951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.900 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.900 [2024-12-07 05:41:41.527619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:38.900 [2024-12-07 05:41:41.601456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:38.900 [2024-12-07 05:41:41.601595] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.900 [2024-12-07 05:41:41.601606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.900 [2024-12-07 05:41:41.601616] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.900 [2024-12-07 05:41:41.601760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.900 [2024-12-07 05:41:41.601891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.900 [2024-12-07 05:41:41.602246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.900 [2024-12-07 05:41:41.602341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.207 05:41:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.207 05:41:42 -- common/autotest_common.sh@862 -- # return 0 00:26:39.207 05:41:42 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 [2024-12-07 05:41:42.251146] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:39.207 05:41:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 05:41:42 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 Malloc0 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 
05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 [2024-12-07 05:41:42.350696] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:39.207 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.207 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.207 [2024-12-07 05:41:42.374511] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:39.207 [ 00:26:39.207 { 00:26:39.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:39.207 "subtype": "Discovery", 00:26:39.207 "listen_addresses": [ 00:26:39.207 { 00:26:39.207 "transport": "TCP", 00:26:39.207 "trtype": "TCP", 00:26:39.207 "adrfam": "IPv4", 00:26:39.207 "traddr": "10.0.0.2", 00:26:39.207 "trsvcid": "4420" 00:26:39.207 } 00:26:39.207 ], 00:26:39.207 "allow_any_host": true, 00:26:39.207 "hosts": [] 00:26:39.207 }, 00:26:39.207 { 00:26:39.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.207 "subtype": "NVMe", 00:26:39.207 "listen_addresses": [ 00:26:39.207 { 00:26:39.207 "transport": "TCP", 00:26:39.207 "trtype": "TCP", 00:26:39.207 "adrfam": "IPv4", 00:26:39.207 "traddr": "10.0.0.2", 00:26:39.207 "trsvcid": "4420" 00:26:39.207 } 00:26:39.207 ], 00:26:39.207 "allow_any_host": true, 00:26:39.207 "hosts": [], 00:26:39.207 "serial_number": "SPDK00000000000001", 00:26:39.207 "model_number": "SPDK bdev Controller", 00:26:39.207 "max_namespaces": 32, 00:26:39.207 "min_cntlid": 1, 00:26:39.207 "max_cntlid": 65519, 00:26:39.207 "namespaces": [ 00:26:39.207 { 00:26:39.207 "nsid": 1, 00:26:39.207 "bdev_name": "Malloc0", 00:26:39.207 "name": "Malloc0", 00:26:39.207 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:39.207 "eui64": "ABCDEF0123456789", 00:26:39.207 "uuid": "1da5dfcb-352b-4489-9084-02ce24d38b03" 00:26:39.207 } 00:26:39.207 ] 00:26:39.207 } 00:26:39.207 ] 00:26:39.207 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.207 05:41:42 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:39.207 [2024-12-07 05:41:42.411362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
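[annotator note] The target-side bring-up that produced the trace above can be reproduced by hand. The following is a minimal sketch based only on the commands logged by nvmf/common.sh and host/identify.sh; it assumes the test NIC ports have already been renamed to cvl_0_0/cvl_0_1, that it is run from the root of an SPDK build tree, and that the default RPC socket /var/tmp/spdk.sock is in use (the waitforlisten/rpc_cmd helpers in the log wrap these same calls):

# Put one port in a private namespace and address both ends (nvmf/common.sh steps above)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace and wait for its RPC socket (rough
# equivalent of the waitforlisten step logged above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

# Same rpc_cmd sequence as host/identify.sh@24..35 in the log
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems   # should report the discovery subsystem plus cnode1, as above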
00:26:39.207 [2024-12-07 05:41:42.411403] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956959 ] 00:26:39.207 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.496 [2024-12-07 05:41:42.444689] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:39.496 [2024-12-07 05:41:42.444734] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:39.496 [2024-12-07 05:41:42.444740] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:39.496 [2024-12-07 05:41:42.444752] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:39.496 [2024-12-07 05:41:42.444760] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:39.496 [2024-12-07 05:41:42.448044] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:39.496 [2024-12-07 05:41:42.448079] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20d1db0 0 00:26:39.496 [2024-12-07 05:41:42.456022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:39.496 [2024-12-07 05:41:42.456033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:39.496 [2024-12-07 05:41:42.456038] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:39.496 [2024-12-07 05:41:42.456041] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:39.496 [2024-12-07 05:41:42.456074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.456081] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.456085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.496 [2024-12-07 05:41:42.456097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:39.496 [2024-12-07 05:41:42.456112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.496 [2024-12-07 05:41:42.464022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-07 05:41:42.464031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-07 05:41:42.464035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.496 [2024-12-07 05:41:42.464050] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:39.496 [2024-12-07 05:41:42.464056] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:39.496 [2024-12-07 05:41:42.464065] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:39.496 [2024-12-07 05:41:42.464076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.496 [2024-12-07 05:41:42.464091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-07 05:41:42.464104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.496 [2024-12-07 05:41:42.464326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-07 05:41:42.464332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-07 05:41:42.464336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.496 [2024-12-07 05:41:42.464345] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:39.496 [2024-12-07 05:41:42.464352] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:39.496 [2024-12-07 05:41:42.464359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.496 [2024-12-07 05:41:42.464373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-07 05:41:42.464384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.496 [2024-12-07 05:41:42.464572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-07 05:41:42.464578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-07 05:41:42.464581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-07 05:41:42.464585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.464591] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:39.497 [2024-12-07 05:41:42.464599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.464605] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.464609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.464613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.464619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-07 05:41:42.464629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.464823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 
05:41:42.464830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.464833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.464837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.464842] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.464854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.464859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.464863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.464870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-07 05:41:42.464880] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.465086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 05:41:42.465093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.465097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.465106] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:39.497 [2024-12-07 05:41:42.465111] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.465118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.465223] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:39.497 [2024-12-07 05:41:42.465228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.465236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.465250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-07 05:41:42.465260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.465477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 05:41:42.465483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.465486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.465495] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:39.497 [2024-12-07 05:41:42.465504] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.465519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-07 05:41:42.465528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.465726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 05:41:42.465732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.465736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.465745] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:39.497 [2024-12-07 05:41:42.465752] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:39.497 [2024-12-07 05:41:42.465759] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:39.497 [2024-12-07 05:41:42.465767] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:39.497 [2024-12-07 05:41:42.465775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.465782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.465789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-07 05:41:42.465800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.466026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.497 [2024-12-07 05:41:42.466033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.497 [2024-12-07 05:41:42.466037] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.466041] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d1db0): datao=0, datal=4096, cccid=0 00:26:39.497 [2024-12-07 05:41:42.466045] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2150d50) on tqpair(0x20d1db0): 
expected_datao=0, payload_size=4096 00:26:39.497 [2024-12-07 05:41:42.466061] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.466066] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 05:41:42.507185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.507189] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.507202] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:39.497 [2024-12-07 05:41:42.507207] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:39.497 [2024-12-07 05:41:42.507212] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:39.497 [2024-12-07 05:41:42.507217] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:39.497 [2024-12-07 05:41:42.507221] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:39.497 [2024-12-07 05:41:42.507226] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:39.497 [2024-12-07 05:41:42.507237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:39.497 [2024-12-07 05:41:42.507244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.507259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.497 [2024-12-07 05:41:42.507271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.497 [2024-12-07 05:41:42.507492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-07 05:41:42.507498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-07 05:41:42.507502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2150d50) on tqpair=0x20d1db0 00:26:39.497 [2024-12-07 05:41:42.507513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.507528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:39.497 [2024-12-07 05:41:42.507535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.507548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-07 05:41:42.507554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.507567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-07 05:41:42.507574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-07 05:41:42.507581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.497 [2024-12-07 05:41:42.507587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.498 [2024-12-07 05:41:42.507591] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:39.498 [2024-12-07 05:41:42.507602] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:39.498 [2024-12-07 05:41:42.507608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.507612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.507615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 05:41:42.507622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-07 05:41:42.507634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150d50, cid 0, qid 0 00:26:39.498 [2024-12-07 05:41:42.507639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2150eb0, cid 1, qid 0 00:26:39.498 [2024-12-07 05:41:42.507644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151010, cid 2, qid 0 00:26:39.498 [2024-12-07 05:41:42.507649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.498 [2024-12-07 05:41:42.507654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21512d0, cid 4, qid 0 00:26:39.498 [2024-12-07 05:41:42.507874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.507881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.507885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.507890] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21512d0) on tqpair=0x20d1db0 00:26:39.498 [2024-12-07 05:41:42.507896] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:39.498 [2024-12-07 05:41:42.507901] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:39.498 [2024-12-07 05:41:42.507912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.507916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.507919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 05:41:42.507926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-07 05:41:42.507936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21512d0, cid 4, qid 0 00:26:39.498 [2024-12-07 05:41:42.512022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.498 [2024-12-07 05:41:42.512030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.498 [2024-12-07 05:41:42.512033] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512037] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d1db0): datao=0, datal=4096, cccid=4 00:26:39.498 [2024-12-07 05:41:42.512042] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21512d0) on tqpair(0x20d1db0): expected_datao=0, payload_size=4096 00:26:39.498 [2024-12-07 05:41:42.512049] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512054] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.512065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.512069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21512d0) on tqpair=0x20d1db0 00:26:39.498 [2024-12-07 05:41:42.512084] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:39.498 [2024-12-07 05:41:42.512106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 05:41:42.512120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-07 05:41:42.512128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 
05:41:42.512141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.498 [2024-12-07 05:41:42.512155] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21512d0, cid 4, qid 0 00:26:39.498 [2024-12-07 05:41:42.512160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151430, cid 5, qid 0 00:26:39.498 [2024-12-07 05:41:42.512420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.498 [2024-12-07 05:41:42.512427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.498 [2024-12-07 05:41:42.512430] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512434] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d1db0): datao=0, datal=1024, cccid=4 00:26:39.498 [2024-12-07 05:41:42.512440] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21512d0) on tqpair(0x20d1db0): expected_datao=0, payload_size=1024 00:26:39.498 [2024-12-07 05:41:42.512447] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512451] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.512463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.512466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.512470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151430) on tqpair=0x20d1db0 00:26:39.498 [2024-12-07 05:41:42.553208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.553217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.553221] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.553225] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21512d0) on tqpair=0x20d1db0 00:26:39.498 [2024-12-07 05:41:42.553237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.553242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.553245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 05:41:42.553252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-07 05:41:42.553267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21512d0, cid 4, qid 0 00:26:39.498 [2024-12-07 05:41:42.553473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.498 [2024-12-07 05:41:42.553479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.498 [2024-12-07 05:41:42.553483] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.553486] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d1db0): datao=0, datal=3072, cccid=4 00:26:39.498 [2024-12-07 05:41:42.553491] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21512d0) on tqpair(0x20d1db0): expected_datao=0, payload_size=3072 
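[annotator note] The FABRIC CONNECT, PROPERTY GET (VS/CAP/CC/CSTS), IDENTIFY, and GET LOG PAGE exchanges traced above all come from the first identify pass against the discovery subsystem (host/identify.sh@39). A sketch of repeating it by hand from the root namespace, where cvl_0_1/10.0.0.1 reaches the listener on 10.0.0.2:4420; the nvme-cli cross-check at the end is an assumption of this note, not part of the test, and needs nvme-cli plus the nvme-tcp module loaded:

# Same invocation as in the log; -L all enables every debug log flag, which is
# where the nvme_tcp.c/nvme_ctrlr.c *DEBUG* lines above come from.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

# Optional cross-check with the kernel initiator (assumed installed):
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 4420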
00:26:39.498 [2024-12-07 05:41:42.553508] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.553512] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.594217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.594220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594224] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21512d0) on tqpair=0x20d1db0 00:26:39.498 [2024-12-07 05:41:42.594235] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d1db0) 00:26:39.498 [2024-12-07 05:41:42.594249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-07 05:41:42.594263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21512d0, cid 4, qid 0 00:26:39.498 [2024-12-07 05:41:42.594453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.498 [2024-12-07 05:41:42.594460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.498 [2024-12-07 05:41:42.594463] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594467] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d1db0): datao=0, datal=8, cccid=4 00:26:39.498 [2024-12-07 05:41:42.594471] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21512d0) on tqpair(0x20d1db0): expected_datao=0, payload_size=8 00:26:39.498 [2024-12-07 05:41:42.594482] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.594486] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.638020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-07 05:41:42.638031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-07 05:41:42.638035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-07 05:41:42.638039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21512d0) on tqpair=0x20d1db0 00:26:39.498 ===================================================== 00:26:39.498 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:39.498 ===================================================== 00:26:39.498 Controller Capabilities/Features 00:26:39.498 ================================ 00:26:39.498 Vendor ID: 0000 00:26:39.498 Subsystem Vendor ID: 0000 00:26:39.498 Serial Number: .................... 00:26:39.498 Model Number: ........................................ 
00:26:39.498 Firmware Version: 24.01.1 00:26:39.498 Recommended Arb Burst: 0 00:26:39.498 IEEE OUI Identifier: 00 00 00 00:26:39.498 Multi-path I/O 00:26:39.498 May have multiple subsystem ports: No 00:26:39.498 May have multiple controllers: No 00:26:39.498 Associated with SR-IOV VF: No 00:26:39.498 Max Data Transfer Size: 131072 00:26:39.498 Max Number of Namespaces: 0 00:26:39.498 Max Number of I/O Queues: 1024 00:26:39.498 NVMe Specification Version (VS): 1.3 00:26:39.498 NVMe Specification Version (Identify): 1.3 00:26:39.498 Maximum Queue Entries: 128 00:26:39.498 Contiguous Queues Required: Yes 00:26:39.498 Arbitration Mechanisms Supported 00:26:39.499 Weighted Round Robin: Not Supported 00:26:39.499 Vendor Specific: Not Supported 00:26:39.499 Reset Timeout: 15000 ms 00:26:39.499 Doorbell Stride: 4 bytes 00:26:39.499 NVM Subsystem Reset: Not Supported 00:26:39.499 Command Sets Supported 00:26:39.499 NVM Command Set: Supported 00:26:39.499 Boot Partition: Not Supported 00:26:39.499 Memory Page Size Minimum: 4096 bytes 00:26:39.499 Memory Page Size Maximum: 4096 bytes 00:26:39.499 Persistent Memory Region: Not Supported 00:26:39.499 Optional Asynchronous Events Supported 00:26:39.499 Namespace Attribute Notices: Not Supported 00:26:39.499 Firmware Activation Notices: Not Supported 00:26:39.499 ANA Change Notices: Not Supported 00:26:39.499 PLE Aggregate Log Change Notices: Not Supported 00:26:39.499 LBA Status Info Alert Notices: Not Supported 00:26:39.499 EGE Aggregate Log Change Notices: Not Supported 00:26:39.499 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.499 Zone Descriptor Change Notices: Not Supported 00:26:39.499 Discovery Log Change Notices: Supported 00:26:39.499 Controller Attributes 00:26:39.499 128-bit Host Identifier: Not Supported 00:26:39.499 Non-Operational Permissive Mode: Not Supported 00:26:39.499 NVM Sets: Not Supported 00:26:39.499 Read Recovery Levels: Not Supported 00:26:39.499 Endurance Groups: Not Supported 00:26:39.499 Predictable Latency Mode: Not Supported 00:26:39.499 Traffic Based Keep ALive: Not Supported 00:26:39.499 Namespace Granularity: Not Supported 00:26:39.499 SQ Associations: Not Supported 00:26:39.499 UUID List: Not Supported 00:26:39.499 Multi-Domain Subsystem: Not Supported 00:26:39.499 Fixed Capacity Management: Not Supported 00:26:39.499 Variable Capacity Management: Not Supported 00:26:39.499 Delete Endurance Group: Not Supported 00:26:39.499 Delete NVM Set: Not Supported 00:26:39.499 Extended LBA Formats Supported: Not Supported 00:26:39.499 Flexible Data Placement Supported: Not Supported 00:26:39.499 00:26:39.499 Controller Memory Buffer Support 00:26:39.499 ================================ 00:26:39.499 Supported: No 00:26:39.499 00:26:39.499 Persistent Memory Region Support 00:26:39.499 ================================ 00:26:39.499 Supported: No 00:26:39.499 00:26:39.499 Admin Command Set Attributes 00:26:39.499 ============================ 00:26:39.499 Security Send/Receive: Not Supported 00:26:39.499 Format NVM: Not Supported 00:26:39.499 Firmware Activate/Download: Not Supported 00:26:39.499 Namespace Management: Not Supported 00:26:39.499 Device Self-Test: Not Supported 00:26:39.499 Directives: Not Supported 00:26:39.499 NVMe-MI: Not Supported 00:26:39.499 Virtualization Management: Not Supported 00:26:39.499 Doorbell Buffer Config: Not Supported 00:26:39.499 Get LBA Status Capability: Not Supported 00:26:39.499 Command & Feature Lockdown Capability: Not Supported 00:26:39.499 Abort Command Limit: 1 00:26:39.499 
Async Event Request Limit: 4 00:26:39.499 Number of Firmware Slots: N/A 00:26:39.499 Firmware Slot 1 Read-Only: N/A 00:26:39.499 Firmware Activation Without Reset: N/A 00:26:39.499 Multiple Update Detection Support: N/A 00:26:39.499 Firmware Update Granularity: No Information Provided 00:26:39.499 Per-Namespace SMART Log: No 00:26:39.499 Asymmetric Namespace Access Log Page: Not Supported 00:26:39.499 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:39.499 Command Effects Log Page: Not Supported 00:26:39.499 Get Log Page Extended Data: Supported 00:26:39.499 Telemetry Log Pages: Not Supported 00:26:39.499 Persistent Event Log Pages: Not Supported 00:26:39.499 Supported Log Pages Log Page: May Support 00:26:39.499 Commands Supported & Effects Log Page: Not Supported 00:26:39.499 Feature Identifiers & Effects Log Page:May Support 00:26:39.499 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.499 Data Area 4 for Telemetry Log: Not Supported 00:26:39.499 Error Log Page Entries Supported: 128 00:26:39.499 Keep Alive: Not Supported 00:26:39.499 00:26:39.499 NVM Command Set Attributes 00:26:39.499 ========================== 00:26:39.499 Submission Queue Entry Size 00:26:39.499 Max: 1 00:26:39.499 Min: 1 00:26:39.499 Completion Queue Entry Size 00:26:39.499 Max: 1 00:26:39.499 Min: 1 00:26:39.499 Number of Namespaces: 0 00:26:39.499 Compare Command: Not Supported 00:26:39.499 Write Uncorrectable Command: Not Supported 00:26:39.499 Dataset Management Command: Not Supported 00:26:39.499 Write Zeroes Command: Not Supported 00:26:39.499 Set Features Save Field: Not Supported 00:26:39.499 Reservations: Not Supported 00:26:39.499 Timestamp: Not Supported 00:26:39.499 Copy: Not Supported 00:26:39.499 Volatile Write Cache: Not Present 00:26:39.499 Atomic Write Unit (Normal): 1 00:26:39.499 Atomic Write Unit (PFail): 1 00:26:39.499 Atomic Compare & Write Unit: 1 00:26:39.499 Fused Compare & Write: Supported 00:26:39.499 Scatter-Gather List 00:26:39.499 SGL Command Set: Supported 00:26:39.499 SGL Keyed: Supported 00:26:39.499 SGL Bit Bucket Descriptor: Not Supported 00:26:39.499 SGL Metadata Pointer: Not Supported 00:26:39.499 Oversized SGL: Not Supported 00:26:39.499 SGL Metadata Address: Not Supported 00:26:39.499 SGL Offset: Supported 00:26:39.499 Transport SGL Data Block: Not Supported 00:26:39.499 Replay Protected Memory Block: Not Supported 00:26:39.499 00:26:39.499 Firmware Slot Information 00:26:39.499 ========================= 00:26:39.499 Active slot: 0 00:26:39.499 00:26:39.499 00:26:39.499 Error Log 00:26:39.499 ========= 00:26:39.499 00:26:39.499 Active Namespaces 00:26:39.499 ================= 00:26:39.499 Discovery Log Page 00:26:39.499 ================== 00:26:39.499 Generation Counter: 2 00:26:39.499 Number of Records: 2 00:26:39.499 Record Format: 0 00:26:39.499 00:26:39.499 Discovery Log Entry 0 00:26:39.499 ---------------------- 00:26:39.499 Transport Type: 3 (TCP) 00:26:39.499 Address Family: 1 (IPv4) 00:26:39.499 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:39.499 Entry Flags: 00:26:39.499 Duplicate Returned Information: 1 00:26:39.499 Explicit Persistent Connection Support for Discovery: 1 00:26:39.499 Transport Requirements: 00:26:39.499 Secure Channel: Not Required 00:26:39.499 Port ID: 0 (0x0000) 00:26:39.499 Controller ID: 65535 (0xffff) 00:26:39.499 Admin Max SQ Size: 128 00:26:39.499 Transport Service Identifier: 4420 00:26:39.499 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:39.499 Transport Address: 10.0.0.2 00:26:39.499 
Discovery Log Entry 1 00:26:39.499 ---------------------- 00:26:39.499 Transport Type: 3 (TCP) 00:26:39.499 Address Family: 1 (IPv4) 00:26:39.499 Subsystem Type: 2 (NVM Subsystem) 00:26:39.499 Entry Flags: 00:26:39.499 Duplicate Returned Information: 0 00:26:39.499 Explicit Persistent Connection Support for Discovery: 0 00:26:39.499 Transport Requirements: 00:26:39.499 Secure Channel: Not Required 00:26:39.499 Port ID: 0 (0x0000) 00:26:39.499 Controller ID: 65535 (0xffff) 00:26:39.499 Admin Max SQ Size: 128 00:26:39.499 Transport Service Identifier: 4420 00:26:39.499 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:39.499 Transport Address: 10.0.0.2 [2024-12-07 05:41:42.638125] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:39.499 [2024-12-07 05:41:42.638138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-12-07 05:41:42.638145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-12-07 05:41:42.638152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-12-07 05:41:42.638158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-12-07 05:41:42.638166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.499 [2024-12-07 05:41:42.638170] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.499 [2024-12-07 05:41:42.638174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.499 [2024-12-07 05:41:42.638181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-12-07 05:41:42.638194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.499 [2024-12-07 05:41:42.638311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.499 [2024-12-07 05:41:42.638318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.499 [2024-12-07 05:41:42.638321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-07 05:41:42.638325] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.499 [2024-12-07 05:41:42.638333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.638347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.638360] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.638562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.638568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.638571] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.638581] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:39.500 [2024-12-07 05:41:42.638585] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:39.500 [2024-12-07 05:41:42.638595] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.638609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.638621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.638815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.638821] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.638824] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638828] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.638839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.638846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.638853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.638863] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.639022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.639029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.639033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.639047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.639061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.639071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.639318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 
05:41:42.639324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.639327] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639331] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.639341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.639355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.639365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.639568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.639574] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.639578] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639582] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.639592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.639606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.639618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.639821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.639827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.639831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.639845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.639852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.639859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.639869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.640074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.640081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.640085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:39.500 [2024-12-07 05:41:42.640088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.640099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.640113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.640123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.640375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.640381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.640385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.640399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.640413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.640423] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.640628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.640634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.640637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.640651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640655] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.640665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.640675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.640932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.640938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.640942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640945] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.640956] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.640963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.640970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.640979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.641147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.500 [2024-12-07 05:41:42.641154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.500 [2024-12-07 05:41:42.641157] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.641161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.500 [2024-12-07 05:41:42.641171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.641175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.500 [2024-12-07 05:41:42.641179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.500 [2024-12-07 05:41:42.641185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-12-07 05:41:42.641196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.500 [2024-12-07 05:41:42.641433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-07 05:41:42.641439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-07 05:41:42.641443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.501 [2024-12-07 05:41:42.641457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.501 [2024-12-07 05:41:42.641471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-07 05:41:42.641481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.501 [2024-12-07 05:41:42.641686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-07 05:41:42.641692] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-07 05:41:42.641695] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641699] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.501 [2024-12-07 05:41:42.641709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-07 
05:41:42.641717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.501 [2024-12-07 05:41:42.641723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-07 05:41:42.641733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.501 [2024-12-07 05:41:42.641939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-07 05:41:42.641946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-07 05:41:42.641949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.501 [2024-12-07 05:41:42.641963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.641970] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d1db0) 00:26:39.501 [2024-12-07 05:41:42.641977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-07 05:41:42.641987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151170, cid 3, qid 0 00:26:39.501 [2024-12-07 05:41:42.646019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-07 05:41:42.646027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-07 05:41:42.646031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-07 05:41:42.646035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151170) on tqpair=0x20d1db0 00:26:39.501 [2024-12-07 05:41:42.646043] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:39.501 00:26:39.501 05:41:42 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:39.501 [2024-12-07 05:41:42.682929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
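The identify pass that starts here is driven by host/identify.sh (the host/identify.sh@45 marker in the xtrace prefix above). For readers who want to reproduce it by hand, a standalone sketch of the same invocation follows; the binary path and the transport ID string are copied from the trace, while everything else (a shell on the CI node, the target still listening on 10.0.0.2:4420) is assumed:

    # Query the NVMe-oF/TCP subsystem exported by the test target.
    # -L all enables the debug log flags, which is what produces the
    # nvme_tcp.c / nvme_ctrlr.c *DEBUG* lines interleaved below.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all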
00:26:39.501 [2024-12-07 05:41:42.682973] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957096 ] 00:26:39.501 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.501 [2024-12-07 05:41:42.715591] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:39.501 [2024-12-07 05:41:42.715632] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:39.501 [2024-12-07 05:41:42.715637] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:39.501 [2024-12-07 05:41:42.715650] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:39.501 [2024-12-07 05:41:42.715657] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:39.501 [2024-12-07 05:41:42.719050] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:39.501 [2024-12-07 05:41:42.719079] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15e6db0 0 00:26:39.764 [2024-12-07 05:41:42.727021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:39.764 [2024-12-07 05:41:42.727032] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:39.764 [2024-12-07 05:41:42.727036] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:39.764 [2024-12-07 05:41:42.727039] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:39.764 [2024-12-07 05:41:42.727070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.764 [2024-12-07 05:41:42.727076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.764 [2024-12-07 05:41:42.727080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.764 [2024-12-07 05:41:42.727092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:39.764 [2024-12-07 05:41:42.727110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.764 [2024-12-07 05:41:42.735023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.764 [2024-12-07 05:41:42.735032] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.764 [2024-12-07 05:41:42.735036] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.764 [2024-12-07 05:41:42.735040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.764 [2024-12-07 05:41:42.735050] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:39.765 [2024-12-07 05:41:42.735057] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:39.765 [2024-12-07 05:41:42.735062] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:39.765 [2024-12-07 05:41:42.735073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 
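From this point the trace is the host-side initialization state machine for nqn.2016-06.io.spdk:cnode1: every "setting state to ..." line is one step (connect adminq, read vs, read cap, check en, enable the controller, identify, configure AER, keep-alive timeout, number of queues, the namespace identify passes, and finally ready), and the surrounding nvme_tcp.c lines are the capsule send/response plumbing for the fabrics property access or admin command that implements that step. When skimming a console log like this, the state progression can be pulled out on its own; a throwaway filter, with the saved log file name as a placeholder:

    # Print the controller bring-up states in the order they were entered,
    # dropping the per-capsule TCP debug chatter around them.
    grep -o 'setting state to [^(]*' console.log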
05:41:42.735080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.735088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.735101] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.735320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.735327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.735330] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.735340] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:39.765 [2024-12-07 05:41:42.735347] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:39.765 [2024-12-07 05:41:42.735355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.735369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.735379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.735585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.735591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.735594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.735604] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:39.765 [2024-12-07 05:41:42.735612] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.735618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.735633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.735646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.735841] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.735847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:39.765 [2024-12-07 05:41:42.735851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735854] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.735860] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.735870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.735878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.735885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.735895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.736070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.736077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.736081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.736090] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:39.765 [2024-12-07 05:41:42.736095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.736102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.736208] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:39.765 [2024-12-07 05:41:42.736212] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.736219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.736233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.736244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.736427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.736433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.736436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on 
tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.736445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:39.765 [2024-12-07 05:41:42.736455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.736469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.736481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.736696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.736702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.736705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.736714] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:39.765 [2024-12-07 05:41:42.736719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:39.765 [2024-12-07 05:41:42.736727] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:39.765 [2024-12-07 05:41:42.736735] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:39.765 [2024-12-07 05:41:42.736743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.765 [2024-12-07 05:41:42.736757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-12-07 05:41:42.736768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.765 [2024-12-07 05:41:42.736983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.765 [2024-12-07 05:41:42.736990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.765 [2024-12-07 05:41:42.736994] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.736998] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=4096, cccid=0 00:26:39.765 [2024-12-07 05:41:42.737003] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1665d50) on tqpair(0x15e6db0): expected_datao=0, payload_size=4096 00:26:39.765 [2024-12-07 05:41:42.737021] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.737026] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.782018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.765 [2024-12-07 05:41:42.782028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.765 [2024-12-07 05:41:42.782031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.782036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.765 [2024-12-07 05:41:42.782045] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:39.765 [2024-12-07 05:41:42.782050] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:39.765 [2024-12-07 05:41:42.782054] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:39.765 [2024-12-07 05:41:42.782058] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:39.765 [2024-12-07 05:41:42.782063] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:39.765 [2024-12-07 05:41:42.782068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:39.765 [2024-12-07 05:41:42.782079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:39.765 [2024-12-07 05:41:42.782088] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.765 [2024-12-07 05:41:42.782092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.766 [2024-12-07 05:41:42.782116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.766 [2024-12-07 05:41:42.782313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.766 [2024-12-07 05:41:42.782319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.766 [2024-12-07 05:41:42.782322] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1665d50) on tqpair=0x15e6db0 00:26:39.766 [2024-12-07 05:41:42.782334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.766 [2024-12-07 05:41:42.782354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782361] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.766 [2024-12-07 05:41:42.782373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.766 [2024-12-07 05:41:42.782392] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.766 [2024-12-07 05:41:42.782409] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782419] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782433] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-12-07 05:41:42.782451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665d50, cid 0, qid 0 00:26:39.766 [2024-12-07 05:41:42.782457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1665eb0, cid 1, qid 0 00:26:39.766 [2024-12-07 05:41:42.782461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666010, cid 2, qid 0 00:26:39.766 [2024-12-07 05:41:42.782468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666170, cid 3, qid 0 00:26:39.766 [2024-12-07 05:41:42.782473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.766 [2024-12-07 05:41:42.782656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.766 [2024-12-07 05:41:42.782662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.766 [2024-12-07 05:41:42.782666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.766 [2024-12-07 05:41:42.782675] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:39.766 
[2024-12-07 05:41:42.782680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782688] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782696] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.766 [2024-12-07 05:41:42.782727] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.766 [2024-12-07 05:41:42.782881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.766 [2024-12-07 05:41:42.782887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.766 [2024-12-07 05:41:42.782891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.766 [2024-12-07 05:41:42.782958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.782974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.782982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.782988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-12-07 05:41:42.782998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.766 [2024-12-07 05:41:42.783161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.766 [2024-12-07 05:41:42.783168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.766 [2024-12-07 05:41:42.783172] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.783176] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=4096, cccid=4 00:26:39.766 [2024-12-07 05:41:42.783180] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16662d0) on tqpair(0x15e6db0): expected_datao=0, payload_size=4096 00:26:39.766 [2024-12-07 05:41:42.783212] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.783216] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.766 [2024-12-07 05:41:42.824160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.766 [2024-12-07 05:41:42.824163] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.766 [2024-12-07 05:41:42.824181] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:39.766 [2024-12-07 05:41:42.824194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.824204] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.824211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.824225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-12-07 05:41:42.824237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.766 [2024-12-07 05:41:42.824451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.766 [2024-12-07 05:41:42.824458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.766 [2024-12-07 05:41:42.824461] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824465] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=4096, cccid=4 00:26:39.766 [2024-12-07 05:41:42.824470] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16662d0) on tqpair(0x15e6db0): expected_datao=0, payload_size=4096 00:26:39.766 [2024-12-07 05:41:42.824502] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.824506] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.869021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.766 [2024-12-07 05:41:42.869030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.766 [2024-12-07 05:41:42.869034] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.869037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.766 [2024-12-07 05:41:42.869053] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.869062] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:39.766 [2024-12-07 05:41:42.869070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.766 [2024-12-07 
05:41:42.869074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.869077] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.766 [2024-12-07 05:41:42.869084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-12-07 05:41:42.869096] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.766 [2024-12-07 05:41:42.869287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.766 [2024-12-07 05:41:42.869294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.766 [2024-12-07 05:41:42.869297] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.766 [2024-12-07 05:41:42.869301] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=4096, cccid=4 00:26:39.766 [2024-12-07 05:41:42.869309] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16662d0) on tqpair(0x15e6db0): expected_datao=0, payload_size=4096 00:26:39.767 [2024-12-07 05:41:42.869323] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.869327] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.910216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.910220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910224] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.910233] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910250] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910266] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:39.767 [2024-12-07 05:41:42.910270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:39.767 [2024-12-07 05:41:42.910275] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:39.767 [2024-12-07 05:41:42.910290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910297] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.910304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.910311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.910325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.767 [2024-12-07 05:41:42.910349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.767 [2024-12-07 05:41:42.910354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666430, cid 5, qid 0 00:26:39.767 [2024-12-07 05:41:42.910540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.910547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.910550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.910562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.910568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.910571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666430) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.910588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.910602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.910612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666430, cid 5, qid 0 00:26:39.767 [2024-12-07 05:41:42.910772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.910778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.910782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910785] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666430) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.910795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.910803] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.910809] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.910819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666430, cid 5, qid 0 00:26:39.767 [2024-12-07 05:41:42.911008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.911020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.911024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666430) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.911037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911042] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.911052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.911062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666430, cid 5, qid 0 00:26:39.767 [2024-12-07 05:41:42.911276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.767 [2024-12-07 05:41:42.911283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.767 [2024-12-07 05:41:42.911286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666430) on tqpair=0x15e6db0 00:26:39.767 [2024-12-07 05:41:42.911303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911311] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.911317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.911324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.911338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.911347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.911361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.911368] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15e6db0) 00:26:39.767 [2024-12-07 05:41:42.911382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.767 [2024-12-07 05:41:42.911393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666430, cid 5, qid 0 00:26:39.767 [2024-12-07 05:41:42.911398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16662d0, cid 4, qid 0 00:26:39.767 [2024-12-07 05:41:42.911403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666590, cid 6, qid 0 00:26:39.767 [2024-12-07 05:41:42.911408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16666f0, cid 7, qid 0 00:26:39.767 [2024-12-07 05:41:42.911614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.767 [2024-12-07 05:41:42.911621] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.767 [2024-12-07 05:41:42.911624] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911628] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=8192, cccid=5 00:26:39.767 [2024-12-07 05:41:42.911632] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1666430) on tqpair(0x15e6db0): expected_datao=0, payload_size=8192 00:26:39.767 [2024-12-07 05:41:42.911730] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911734] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.767 [2024-12-07 05:41:42.911746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.767 [2024-12-07 05:41:42.911749] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911753] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=512, cccid=4 00:26:39.767 [2024-12-07 05:41:42.911757] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16662d0) on tqpair(0x15e6db0): expected_datao=0, payload_size=512 00:26:39.767 [2024-12-07 05:41:42.911764] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911768] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.767 [2024-12-07 05:41:42.911779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.767 [2024-12-07 05:41:42.911783] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911786] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=512, cccid=6 00:26:39.767 [2024-12-07 05:41:42.911790] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1666590) on tqpair(0x15e6db0): expected_datao=0, payload_size=512 00:26:39.767 [2024-12-07 05:41:42.911797] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911801] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.767 [2024-12-07 05:41:42.911807] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.767 [2024-12-07 05:41:42.911812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.768 [2024-12-07 05:41:42.911816] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911821] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e6db0): datao=0, datal=4096, cccid=7 00:26:39.768 [2024-12-07 05:41:42.911826] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16666f0) on tqpair(0x15e6db0): expected_datao=0, payload_size=4096 00:26:39.768 [2024-12-07 05:41:42.911833] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911837] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.768 [2024-12-07 05:41:42.911857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.768 [2024-12-07 05:41:42.911861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911864] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666430) on tqpair=0x15e6db0 00:26:39.768 [2024-12-07 05:41:42.911878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.768 [2024-12-07 05:41:42.911884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.768 [2024-12-07 05:41:42.911888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16662d0) on tqpair=0x15e6db0 00:26:39.768 [2024-12-07 05:41:42.911902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.768 [2024-12-07 05:41:42.911908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.768 [2024-12-07 05:41:42.911911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666590) on tqpair=0x15e6db0 00:26:39.768 [2024-12-07 05:41:42.911923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.768 [2024-12-07 05:41:42.911929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.768 [2024-12-07 05:41:42.911933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.768 [2024-12-07 05:41:42.911936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16666f0) on tqpair=0x15e6db0 00:26:39.768 ===================================================== 00:26:39.768 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.768 ===================================================== 00:26:39.768 Controller Capabilities/Features 00:26:39.768 ================================ 00:26:39.768 Vendor ID: 8086 00:26:39.768 Subsystem Vendor ID: 8086 00:26:39.768 Serial Number: SPDK00000000000001 00:26:39.768 Model Number: SPDK bdev Controller 00:26:39.768 Firmware Version: 24.01.1 00:26:39.768 Recommended Arb Burst: 6 00:26:39.768 IEEE OUI Identifier: e4 d2 5c 00:26:39.768 Multi-path I/O 00:26:39.768 May have multiple subsystem 
ports: Yes
00:26:39.768 May have multiple controllers: Yes
00:26:39.768 Associated with SR-IOV VF: No
00:26:39.768 Max Data Transfer Size: 131072
00:26:39.768 Max Number of Namespaces: 32
00:26:39.768 Max Number of I/O Queues: 127
00:26:39.768 NVMe Specification Version (VS): 1.3
00:26:39.768 NVMe Specification Version (Identify): 1.3
00:26:39.768 Maximum Queue Entries: 128
00:26:39.768 Contiguous Queues Required: Yes
00:26:39.768 Arbitration Mechanisms Supported
00:26:39.768 Weighted Round Robin: Not Supported
00:26:39.768 Vendor Specific: Not Supported
00:26:39.768 Reset Timeout: 15000 ms
00:26:39.768 Doorbell Stride: 4 bytes
00:26:39.768 NVM Subsystem Reset: Not Supported
00:26:39.768 Command Sets Supported
00:26:39.768 NVM Command Set: Supported
00:26:39.768 Boot Partition: Not Supported
00:26:39.768 Memory Page Size Minimum: 4096 bytes
00:26:39.768 Memory Page Size Maximum: 4096 bytes
00:26:39.768 Persistent Memory Region: Not Supported
00:26:39.768 Optional Asynchronous Events Supported
00:26:39.768 Namespace Attribute Notices: Supported
00:26:39.768 Firmware Activation Notices: Not Supported
00:26:39.768 ANA Change Notices: Not Supported
00:26:39.768 PLE Aggregate Log Change Notices: Not Supported
00:26:39.768 LBA Status Info Alert Notices: Not Supported
00:26:39.768 EGE Aggregate Log Change Notices: Not Supported
00:26:39.768 Normal NVM Subsystem Shutdown event: Not Supported
00:26:39.768 Zone Descriptor Change Notices: Not Supported
00:26:39.768 Discovery Log Change Notices: Not Supported
00:26:39.768 Controller Attributes
00:26:39.768 128-bit Host Identifier: Supported
00:26:39.768 Non-Operational Permissive Mode: Not Supported
00:26:39.768 NVM Sets: Not Supported
00:26:39.768 Read Recovery Levels: Not Supported
00:26:39.768 Endurance Groups: Not Supported
00:26:39.768 Predictable Latency Mode: Not Supported
00:26:39.768 Traffic Based Keep ALive: Not Supported
00:26:39.768 Namespace Granularity: Not Supported
00:26:39.768 SQ Associations: Not Supported
00:26:39.768 UUID List: Not Supported
00:26:39.768 Multi-Domain Subsystem: Not Supported
00:26:39.768 Fixed Capacity Management: Not Supported
00:26:39.768 Variable Capacity Management: Not Supported
00:26:39.768 Delete Endurance Group: Not Supported
00:26:39.768 Delete NVM Set: Not Supported
00:26:39.768 Extended LBA Formats Supported: Not Supported
00:26:39.768 Flexible Data Placement Supported: Not Supported
00:26:39.768
00:26:39.768 Controller Memory Buffer Support
00:26:39.768 ================================
00:26:39.768 Supported: No
00:26:39.768
00:26:39.768 Persistent Memory Region Support
00:26:39.768 ================================
00:26:39.768 Supported: No
00:26:39.768
00:26:39.768 Admin Command Set Attributes
00:26:39.768 ============================
00:26:39.768 Security Send/Receive: Not Supported
00:26:39.768 Format NVM: Not Supported
00:26:39.768 Firmware Activate/Download: Not Supported
00:26:39.768 Namespace Management: Not Supported
00:26:39.768 Device Self-Test: Not Supported
00:26:39.768 Directives: Not Supported
00:26:39.768 NVMe-MI: Not Supported
00:26:39.768 Virtualization Management: Not Supported
00:26:39.768 Doorbell Buffer Config: Not Supported
00:26:39.768 Get LBA Status Capability: Not Supported
00:26:39.768 Command & Feature Lockdown Capability: Not Supported
00:26:39.768 Abort Command Limit: 4
00:26:39.768 Async Event Request Limit: 4
00:26:39.768 Number of Firmware Slots: N/A
00:26:39.768 Firmware Slot 1 Read-Only: N/A
00:26:39.768 Firmware Activation Without Reset: N/A
00:26:39.768 Multiple Update Detection Support: N/A
00:26:39.768 Firmware Update Granularity: No Information Provided
00:26:39.768 Per-Namespace SMART Log: No
00:26:39.768 Asymmetric Namespace Access Log Page: Not Supported
00:26:39.768 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:39.768 Command Effects Log Page: Supported
00:26:39.768 Get Log Page Extended Data: Supported
00:26:39.768 Telemetry Log Pages: Not Supported
00:26:39.768 Persistent Event Log Pages: Not Supported
00:26:39.768 Supported Log Pages Log Page: May Support
00:26:39.768 Commands Supported & Effects Log Page: Not Supported
00:26:39.768 Feature Identifiers & Effects Log Page:May Support
00:26:39.768 NVMe-MI Commands & Effects Log Page: May Support
00:26:39.768 Data Area 4 for Telemetry Log: Not Supported
00:26:39.768 Error Log Page Entries Supported: 128
00:26:39.768 Keep Alive: Supported
00:26:39.768 Keep Alive Granularity: 10000 ms
00:26:39.768
00:26:39.768 NVM Command Set Attributes
00:26:39.768 ==========================
00:26:39.768 Submission Queue Entry Size
00:26:39.768 Max: 64
00:26:39.768 Min: 64
00:26:39.768 Completion Queue Entry Size
00:26:39.768 Max: 16
00:26:39.768 Min: 16
00:26:39.768 Number of Namespaces: 32
00:26:39.768 Compare Command: Supported
00:26:39.768 Write Uncorrectable Command: Not Supported
00:26:39.768 Dataset Management Command: Supported
00:26:39.768 Write Zeroes Command: Supported
00:26:39.768 Set Features Save Field: Not Supported
00:26:39.768 Reservations: Supported
00:26:39.768 Timestamp: Not Supported
00:26:39.768 Copy: Supported
00:26:39.768 Volatile Write Cache: Present
00:26:39.768 Atomic Write Unit (Normal): 1
00:26:39.768 Atomic Write Unit (PFail): 1
00:26:39.768 Atomic Compare & Write Unit: 1
00:26:39.768 Fused Compare & Write: Supported
00:26:39.768 Scatter-Gather List
00:26:39.768 SGL Command Set: Supported
00:26:39.768 SGL Keyed: Supported
00:26:39.768 SGL Bit Bucket Descriptor: Not Supported
00:26:39.768 SGL Metadata Pointer: Not Supported
00:26:39.768 Oversized SGL: Not Supported
00:26:39.768 SGL Metadata Address: Not Supported
00:26:39.768 SGL Offset: Supported
00:26:39.768 Transport SGL Data Block: Not Supported
00:26:39.768 Replay Protected Memory Block: Not Supported
00:26:39.768
00:26:39.768 Firmware Slot Information
00:26:39.768 =========================
00:26:39.768 Active slot: 1
00:26:39.768 Slot 1 Firmware Revision: 24.01.1
00:26:39.768
00:26:39.768
00:26:39.768 Commands Supported and Effects
00:26:39.768 ==============================
00:26:39.768 Admin Commands
00:26:39.768 --------------
00:26:39.768 Get Log Page (02h): Supported
00:26:39.768 Identify (06h): Supported
00:26:39.768 Abort (08h): Supported
00:26:39.768 Set Features (09h): Supported
00:26:39.768 Get Features (0Ah): Supported
00:26:39.768 Asynchronous Event Request (0Ch): Supported
00:26:39.768 Keep Alive (18h): Supported
00:26:39.768 I/O Commands
00:26:39.768 ------------
00:26:39.768 Flush (00h): Supported LBA-Change
00:26:39.769 Write (01h): Supported LBA-Change
00:26:39.769 Read (02h): Supported
00:26:39.769 Compare (05h): Supported
00:26:39.769 Write Zeroes (08h): Supported LBA-Change
00:26:39.769 Dataset Management (09h): Supported LBA-Change
00:26:39.769 Copy (19h): Supported LBA-Change
00:26:39.769 Unknown (79h): Supported LBA-Change
00:26:39.769 Unknown (7Ah): Supported
00:26:39.769
00:26:39.769 Error Log
00:26:39.769 =========
00:26:39.769
00:26:39.769 Arbitration
00:26:39.769 ===========
00:26:39.769 Arbitration Burst: 1
00:26:39.769
00:26:39.769 Power Management
00:26:39.769 ================
00:26:39.769
Number of Power States: 1 00:26:39.769 Current Power State: Power State #0 00:26:39.769 Power State #0: 00:26:39.769 Max Power: 0.00 W 00:26:39.769 Non-Operational State: Operational 00:26:39.769 Entry Latency: Not Reported 00:26:39.769 Exit Latency: Not Reported 00:26:39.769 Relative Read Throughput: 0 00:26:39.769 Relative Read Latency: 0 00:26:39.769 Relative Write Throughput: 0 00:26:39.769 Relative Write Latency: 0 00:26:39.769 Idle Power: Not Reported 00:26:39.769 Active Power: Not Reported 00:26:39.769 Non-Operational Permissive Mode: Not Supported 00:26:39.769 00:26:39.769 Health Information 00:26:39.769 ================== 00:26:39.769 Critical Warnings: 00:26:39.769 Available Spare Space: OK 00:26:39.769 Temperature: OK 00:26:39.769 Device Reliability: OK 00:26:39.769 Read Only: No 00:26:39.769 Volatile Memory Backup: OK 00:26:39.769 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:39.769 Temperature Threshold: [2024-12-07 05:41:42.912043] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15e6db0) 00:26:39.769 [2024-12-07 05:41:42.912059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-12-07 05:41:42.912070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16666f0, cid 7, qid 0 00:26:39.769 [2024-12-07 05:41:42.912235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.769 [2024-12-07 05:41:42.912242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.769 [2024-12-07 05:41:42.912245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16666f0) on tqpair=0x15e6db0 00:26:39.769 [2024-12-07 05:41:42.912277] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:39.769 [2024-12-07 05:41:42.912288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.769 [2024-12-07 05:41:42.912295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.769 [2024-12-07 05:41:42.912301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.769 [2024-12-07 05:41:42.912307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.769 [2024-12-07 05:41:42.912315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e6db0) 00:26:39.769 [2024-12-07 05:41:42.912332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-12-07 05:41:42.912343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666170, cid 3, qid 0 
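The block above is the report that spdk_nvme_identify actually prints for nqn.2016-06.io.spdk:cnode1 (controller capabilities, admin and NVM command set attributes, firmware slot information, power state #0); the bracketed lines around it are the -L all debug tracing, and the teardown of that controller has already begun just above (Prepare to destruct SSD, with queued admin requests completed as ABORTED - SQ DELETION). What follows is the shutdown property write and the polling for shutdown completion (the trace reports a 10000 ms shutdown timeout). To capture only the human-readable report, the same command can be rerun without the log flag; a sketch under the same assumptions as the earlier one:

    # Identify report only, with no debug tracing interleaved.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'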
00:26:39.769 [2024-12-07 05:41:42.912516] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.769 [2024-12-07 05:41:42.912522] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.769 [2024-12-07 05:41:42.912526] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666170) on tqpair=0x15e6db0 00:26:39.769 [2024-12-07 05:41:42.912537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e6db0) 00:26:39.769 [2024-12-07 05:41:42.912551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-12-07 05:41:42.912564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666170, cid 3, qid 0 00:26:39.769 [2024-12-07 05:41:42.912765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.769 [2024-12-07 05:41:42.912771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.769 [2024-12-07 05:41:42.912775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666170) on tqpair=0x15e6db0 00:26:39.769 [2024-12-07 05:41:42.912784] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:39.769 [2024-12-07 05:41:42.912789] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:39.769 [2024-12-07 05:41:42.912798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912802] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e6db0) 00:26:39.769 [2024-12-07 05:41:42.912812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-12-07 05:41:42.912822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666170, cid 3, qid 0 00:26:39.769 [2024-12-07 05:41:42.912976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.769 [2024-12-07 05:41:42.912982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.769 [2024-12-07 05:41:42.912986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.912989] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666170) on tqpair=0x15e6db0 00:26:39.769 [2024-12-07 05:41:42.913000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.913004] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.913008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e6db0) 00:26:39.769 [2024-12-07 05:41:42.917023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-12-07 
05:41:42.917036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1666170, cid 3, qid 0 00:26:39.769 [2024-12-07 05:41:42.917225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.769 [2024-12-07 05:41:42.917232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.769 [2024-12-07 05:41:42.917236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.769 [2024-12-07 05:41:42.917239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1666170) on tqpair=0x15e6db0 00:26:39.769 [2024-12-07 05:41:42.917250] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:26:39.769 0 Kelvin (-273 Celsius) 00:26:39.769 Available Spare: 0% 00:26:39.769 Available Spare Threshold: 0% 00:26:39.769 Life Percentage Used: 0% 00:26:39.769 Data Units Read: 0 00:26:39.769 Data Units Written: 0 00:26:39.769 Host Read Commands: 0 00:26:39.769 Host Write Commands: 0 00:26:39.769 Controller Busy Time: 0 minutes 00:26:39.769 Power Cycles: 0 00:26:39.769 Power On Hours: 0 hours 00:26:39.769 Unsafe Shutdowns: 0 00:26:39.769 Unrecoverable Media Errors: 0 00:26:39.769 Lifetime Error Log Entries: 0 00:26:39.769 Warning Temperature Time: 0 minutes 00:26:39.769 Critical Temperature Time: 0 minutes 00:26:39.769 00:26:39.769 Number of Queues 00:26:39.769 ================ 00:26:39.769 Number of I/O Submission Queues: 127 00:26:39.769 Number of I/O Completion Queues: 127 00:26:39.769 00:26:39.769 Active Namespaces 00:26:39.769 ================= 00:26:39.769 Namespace ID:1 00:26:39.769 Error Recovery Timeout: Unlimited 00:26:39.769 Command Set Identifier: NVM (00h) 00:26:39.769 Deallocate: Supported 00:26:39.770 Deallocated/Unwritten Error: Not Supported 00:26:39.770 Deallocated Read Value: Unknown 00:26:39.770 Deallocate in Write Zeroes: Not Supported 00:26:39.770 Deallocated Guard Field: 0xFFFF 00:26:39.770 Flush: Supported 00:26:39.770 Reservation: Supported 00:26:39.770 Namespace Sharing Capabilities: Multiple Controllers 00:26:39.770 Size (in LBAs): 131072 (0GiB) 00:26:39.770 Capacity (in LBAs): 131072 (0GiB) 00:26:39.770 Utilization (in LBAs): 131072 (0GiB) 00:26:39.770 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:39.770 EUI64: ABCDEF0123456789 00:26:39.770 UUID: 1da5dfcb-352b-4489-9084-02ce24d38b03 00:26:39.770 Thin Provisioning: Not Supported 00:26:39.770 Per-NS Atomic Units: Yes 00:26:39.770 Atomic Boundary Size (Normal): 0 00:26:39.770 Atomic Boundary Size (PFail): 0 00:26:39.770 Atomic Boundary Offset: 0 00:26:39.770 Maximum Single Source Range Length: 65535 00:26:39.770 Maximum Copy Length: 65535 00:26:39.770 Maximum Source Range Count: 1 00:26:39.770 NGUID/EUI64 Never Reused: No 00:26:39.770 Namespace Write Protected: No 00:26:39.770 Number of LBA Formats: 1 00:26:39.770 Current LBA Format: LBA Format #00 00:26:39.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:39.770 00:26:39.770 05:41:42 -- host/identify.sh@51 -- # sync 00:26:39.770 05:41:42 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.770 05:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.770 05:41:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.770 05:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.770 05:41:42 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:39.770 05:41:42 -- host/identify.sh@56 -- # nvmftestfini 00:26:39.770 05:41:42 -- nvmf/common.sh@476 -- # nvmfcleanup 
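host/identify.sh then cleans up: the test subsystem was deleted over RPC just above, and nvmfcleanup below unloads the kernel NVMe/TCP initiator stack. For reference, that teardown boils down to the following sketch; the rpc.py path and subsystem NQN are the ones used by this run, so treat it as a condensed restatement of what the harness does here rather than a standalone recipe.
# target side: drop the test subsystem over the local RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# initiator side: unload nvme-tcp; in this run the verbose output below shows the
# now-unused nvme_fabrics and nvme_keyring modules being removed along with it
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics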
00:26:39.770 05:41:42 -- nvmf/common.sh@116 -- # sync 00:26:39.770 05:41:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:39.770 05:41:42 -- nvmf/common.sh@119 -- # set +e 00:26:39.770 05:41:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:39.770 05:41:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:39.770 rmmod nvme_tcp 00:26:39.770 rmmod nvme_fabrics 00:26:39.770 rmmod nvme_keyring 00:26:40.031 05:41:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:40.031 05:41:43 -- nvmf/common.sh@123 -- # set -e 00:26:40.031 05:41:43 -- nvmf/common.sh@124 -- # return 0 00:26:40.031 05:41:43 -- nvmf/common.sh@477 -- # '[' -n 1956859 ']' 00:26:40.031 05:41:43 -- nvmf/common.sh@478 -- # killprocess 1956859 00:26:40.031 05:41:43 -- common/autotest_common.sh@936 -- # '[' -z 1956859 ']' 00:26:40.031 05:41:43 -- common/autotest_common.sh@940 -- # kill -0 1956859 00:26:40.031 05:41:43 -- common/autotest_common.sh@941 -- # uname 00:26:40.031 05:41:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:40.031 05:41:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1956859 00:26:40.031 05:41:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:40.031 05:41:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:40.031 05:41:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1956859' 00:26:40.032 killing process with pid 1956859 00:26:40.032 05:41:43 -- common/autotest_common.sh@955 -- # kill 1956859 00:26:40.032 [2024-12-07 05:41:43.080508] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:40.032 05:41:43 -- common/autotest_common.sh@960 -- # wait 1956859 00:26:40.032 05:41:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:40.032 05:41:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:40.032 05:41:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:40.032 05:41:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.032 05:41:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:40.032 05:41:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.032 05:41:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.032 05:41:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.586 05:41:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:42.586 00:26:42.586 real 0m11.556s 00:26:42.586 user 0m8.724s 00:26:42.586 sys 0m5.955s 00:26:42.586 05:41:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:42.586 05:41:45 -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 ************************************ 00:26:42.586 END TEST nvmf_identify 00:26:42.586 ************************************ 00:26:42.586 05:41:45 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:42.586 05:41:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:42.586 05:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.586 05:41:45 -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 ************************************ 00:26:42.586 START TEST nvmf_perf 00:26:42.586 ************************************ 00:26:42.586 05:41:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:42.586 * 
Looking for test storage... 00:26:42.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.586 05:41:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:42.586 05:41:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:42.586 05:41:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:42.586 05:41:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:42.586 05:41:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:42.586 05:41:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:42.586 05:41:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:42.586 05:41:45 -- scripts/common.sh@335 -- # IFS=.-: 00:26:42.586 05:41:45 -- scripts/common.sh@335 -- # read -ra ver1 00:26:42.586 05:41:45 -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.586 05:41:45 -- scripts/common.sh@336 -- # read -ra ver2 00:26:42.586 05:41:45 -- scripts/common.sh@337 -- # local 'op=<' 00:26:42.586 05:41:45 -- scripts/common.sh@339 -- # ver1_l=2 00:26:42.586 05:41:45 -- scripts/common.sh@340 -- # ver2_l=1 00:26:42.586 05:41:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:42.586 05:41:45 -- scripts/common.sh@343 -- # case "$op" in 00:26:42.586 05:41:45 -- scripts/common.sh@344 -- # : 1 00:26:42.586 05:41:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:42.586 05:41:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.586 05:41:45 -- scripts/common.sh@364 -- # decimal 1 00:26:42.586 05:41:45 -- scripts/common.sh@352 -- # local d=1 00:26:42.586 05:41:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.586 05:41:45 -- scripts/common.sh@354 -- # echo 1 00:26:42.586 05:41:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:42.586 05:41:45 -- scripts/common.sh@365 -- # decimal 2 00:26:42.586 05:41:45 -- scripts/common.sh@352 -- # local d=2 00:26:42.586 05:41:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.586 05:41:45 -- scripts/common.sh@354 -- # echo 2 00:26:42.586 05:41:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:42.586 05:41:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:42.586 05:41:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:42.586 05:41:45 -- scripts/common.sh@367 -- # return 0 00:26:42.586 05:41:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.586 05:41:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:42.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.586 --rc genhtml_branch_coverage=1 00:26:42.586 --rc genhtml_function_coverage=1 00:26:42.586 --rc genhtml_legend=1 00:26:42.586 --rc geninfo_all_blocks=1 00:26:42.586 --rc geninfo_unexecuted_blocks=1 00:26:42.587 00:26:42.587 ' 00:26:42.587 05:41:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:42.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.587 --rc genhtml_branch_coverage=1 00:26:42.587 --rc genhtml_function_coverage=1 00:26:42.587 --rc genhtml_legend=1 00:26:42.587 --rc geninfo_all_blocks=1 00:26:42.587 --rc geninfo_unexecuted_blocks=1 00:26:42.587 00:26:42.587 ' 00:26:42.587 05:41:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:42.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.587 --rc genhtml_branch_coverage=1 00:26:42.587 --rc genhtml_function_coverage=1 00:26:42.587 --rc genhtml_legend=1 00:26:42.587 --rc geninfo_all_blocks=1 00:26:42.587 --rc geninfo_unexecuted_blocks=1 
00:26:42.587 00:26:42.587 ' 00:26:42.587 05:41:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:42.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.587 --rc genhtml_branch_coverage=1 00:26:42.587 --rc genhtml_function_coverage=1 00:26:42.587 --rc genhtml_legend=1 00:26:42.587 --rc geninfo_all_blocks=1 00:26:42.587 --rc geninfo_unexecuted_blocks=1 00:26:42.587 00:26:42.587 ' 00:26:42.587 05:41:45 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.587 05:41:45 -- nvmf/common.sh@7 -- # uname -s 00:26:42.587 05:41:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.587 05:41:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.587 05:41:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.587 05:41:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.587 05:41:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.587 05:41:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.587 05:41:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.587 05:41:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.587 05:41:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.587 05:41:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.587 05:41:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:42.587 05:41:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:42.587 05:41:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.587 05:41:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.587 05:41:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.587 05:41:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.587 05:41:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.587 05:41:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.587 05:41:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.587 05:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.587 05:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.587 05:41:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.587 05:41:45 -- paths/export.sh@5 -- # export PATH 00:26:42.587 05:41:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.587 05:41:45 -- nvmf/common.sh@46 -- # : 0 00:26:42.587 05:41:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:42.587 05:41:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:42.587 05:41:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:42.587 05:41:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.587 05:41:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.587 05:41:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:42.587 05:41:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:42.587 05:41:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:42.587 05:41:45 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:42.587 05:41:45 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:42.587 05:41:45 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.587 05:41:45 -- host/perf.sh@17 -- # nvmftestinit 00:26:42.587 05:41:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:42.587 05:41:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.587 05:41:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:42.587 05:41:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:42.587 05:41:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:42.587 05:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.587 05:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.587 05:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.587 05:41:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:42.587 05:41:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:42.587 05:41:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:42.587 05:41:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.728 05:41:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:50.728 05:41:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:50.728 05:41:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:50.728 05:41:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:50.728 05:41:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:50.728 05:41:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:50.728 05:41:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:50.728 05:41:52 -- nvmf/common.sh@294 -- # net_devs=() 
00:26:50.728 05:41:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:50.728 05:41:52 -- nvmf/common.sh@295 -- # e810=() 00:26:50.728 05:41:52 -- nvmf/common.sh@295 -- # local -ga e810 00:26:50.728 05:41:52 -- nvmf/common.sh@296 -- # x722=() 00:26:50.728 05:41:52 -- nvmf/common.sh@296 -- # local -ga x722 00:26:50.728 05:41:52 -- nvmf/common.sh@297 -- # mlx=() 00:26:50.728 05:41:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:50.728 05:41:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.728 05:41:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.729 05:41:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.729 05:41:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:50.729 05:41:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:50.729 05:41:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:50.729 05:41:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:50.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:50.729 05:41:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:50.729 05:41:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:50.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:50.729 05:41:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:50.729 05:41:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.729 05:41:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:50.729 05:41:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:50.729 Found net devices under 0000:31:00.0: cvl_0_0 00:26:50.729 05:41:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.729 05:41:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:50.729 05:41:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.729 05:41:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.729 05:41:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:50.729 Found net devices under 0000:31:00.1: cvl_0_1 00:26:50.729 05:41:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.729 05:41:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:50.729 05:41:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:50.729 05:41:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:50.729 05:41:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.729 05:41:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.729 05:41:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.729 05:41:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:50.729 05:41:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.729 05:41:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.729 05:41:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:50.729 05:41:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.729 05:41:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.729 05:41:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:50.729 05:41:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:50.729 05:41:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.729 05:41:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.729 05:41:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.729 05:41:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.729 05:41:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:50.729 05:41:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.729 05:41:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.729 05:41:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.729 05:41:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:50.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:26:50.729 00:26:50.729 --- 10.0.0.2 ping statistics --- 00:26:50.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.729 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:26:50.729 05:41:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:26:50.729 00:26:50.729 --- 10.0.0.1 ping statistics --- 00:26:50.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.729 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:26:50.729 05:41:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.729 05:41:53 -- nvmf/common.sh@410 -- # return 0 00:26:50.729 05:41:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:50.729 05:41:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.729 05:41:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:50.729 05:41:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:50.729 05:41:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.729 05:41:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:50.729 05:41:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:50.729 05:41:53 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:50.729 05:41:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:50.729 05:41:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:50.729 05:41:53 -- common/autotest_common.sh@10 -- # set +x 00:26:50.729 05:41:53 -- nvmf/common.sh@469 -- # nvmfpid=1961346 00:26:50.729 05:41:53 -- nvmf/common.sh@470 -- # waitforlisten 1961346 00:26:50.729 05:41:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.729 05:41:53 -- common/autotest_common.sh@829 -- # '[' -z 1961346 ']' 00:26:50.729 05:41:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.729 05:41:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.729 05:41:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.729 05:41:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.729 05:41:53 -- common/autotest_common.sh@10 -- # set +x 00:26:50.729 [2024-12-07 05:41:53.144212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:50.729 [2024-12-07 05:41:53.144277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.729 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.729 [2024-12-07 05:41:53.218489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.729 [2024-12-07 05:41:53.291411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:50.729 [2024-12-07 05:41:53.291544] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.729 [2024-12-07 05:41:53.291554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.729 [2024-12-07 05:41:53.291563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
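The two successful pings above close out the physical-NIC wiring that nvmf_tcp_init performed: the target-side port (cvl_0_0) is moved into its own network namespace while the initiator side (cvl_0_1) stays in the root namespace. Pulled together from the commands in the trace (address-flush steps omitted), the wiring is roughly the sketch below; the interface names and 10.0.0.x addresses are simply what this run picked, not fixed values.
# target interface lives in a dedicated namespace, initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator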
00:26:50.730 [2024-12-07 05:41:53.291734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.730 [2024-12-07 05:41:53.291849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.730 [2024-12-07 05:41:53.292007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.730 [2024-12-07 05:41:53.292008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.730 05:41:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.730 05:41:53 -- common/autotest_common.sh@862 -- # return 0 00:26:50.730 05:41:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:50.730 05:41:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.730 05:41:53 -- common/autotest_common.sh@10 -- # set +x 00:26:50.989 05:41:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.989 05:41:53 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:50.989 05:41:53 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:51.250 05:41:54 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:51.250 05:41:54 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:51.510 05:41:54 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:51.510 05:41:54 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:51.770 05:41:54 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:51.770 05:41:54 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:51.770 05:41:54 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:51.770 05:41:54 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:51.770 05:41:54 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:51.770 [2024-12-07 05:41:54.967260] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.770 05:41:55 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.036 05:41:55 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.036 05:41:55 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.294 05:41:55 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.294 05:41:55 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:52.294 05:41:55 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.554 [2024-12-07 05:41:55.665997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.554 05:41:55 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:52.814 05:41:55 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:52.814 05:41:55 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:52.814 05:41:55 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
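The perf target set up in the trace above reduces to a short RPC sequence: create the TCP transport, create one subsystem, attach the Malloc bdev and the local NVMe bdev as namespaces, and expose the subsystem (plus discovery) on 10.0.0.2:4420. Restated as a plain sketch, using the same rpc.py path, NQN, and bdev names as this run:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                                    # '-t tcp -o' as assembled by nvmf/common.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # 64 MiB / 512 B malloc bdev created earlier
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1           # local NVMe at 0000:65:00.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
With that in place, every spdk_nvme_perf invocation that follows simply points at 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' while varying queue depth (-q) and I/O size (-o).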
00:26:52.814 05:41:55 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:54.198 Initializing NVMe Controllers 00:26:54.198 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:54.198 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:54.198 Initialization complete. Launching workers. 00:26:54.198 ======================================================== 00:26:54.198 Latency(us) 00:26:54.198 Device Information : IOPS MiB/s Average min max 00:26:54.198 PCIE (0000:65:00.0) NSID 1 from core 0: 81005.22 316.43 394.59 13.12 4626.39 00:26:54.198 ======================================================== 00:26:54.198 Total : 81005.22 316.43 394.59 13.12 4626.39 00:26:54.198 00:26:54.198 05:41:57 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.198 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.585 Initializing NVMe Controllers 00:26:55.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:55.585 Initialization complete. Launching workers. 00:26:55.585 ======================================================== 00:26:55.585 Latency(us) 00:26:55.585 Device Information : IOPS MiB/s Average min max 00:26:55.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.00 0.43 9180.19 222.17 46546.10 00:26:55.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16648.86 7957.53 51878.35 00:26:55.585 ======================================================== 00:26:55.585 Total : 170.00 0.66 11860.13 222.17 51878.35 00:26:55.585 00:26:55.585 05:41:58 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.585 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.969 Initializing NVMe Controllers 00:26:56.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:56.969 Initialization complete. Launching workers. 
00:26:56.969 ======================================================== 00:26:56.969 Latency(us) 00:26:56.969 Device Information : IOPS MiB/s Average min max 00:26:56.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10640.16 41.56 3008.32 547.26 6573.82 00:26:56.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.70 15.06 8346.61 7147.66 16080.88 00:26:56.969 ======================================================== 00:26:56.969 Total : 14494.85 56.62 4427.96 547.26 16080.88 00:26:56.969 00:26:56.969 05:41:59 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:56.969 05:41:59 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:56.969 05:41:59 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:56.969 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.513 Initializing NVMe Controllers 00:26:59.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.513 Controller IO queue size 128, less than required. 00:26:59.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.513 Controller IO queue size 128, less than required. 00:26:59.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:59.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:59.513 Initialization complete. Launching workers. 00:26:59.513 ======================================================== 00:26:59.513 Latency(us) 00:26:59.513 Device Information : IOPS MiB/s Average min max 00:26:59.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1998.13 499.53 64592.67 38810.11 100585.81 00:26:59.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 571.82 142.96 234001.43 63064.26 376643.11 00:26:59.513 ======================================================== 00:26:59.513 Total : 2569.96 642.49 102286.61 38810.11 376643.11 00:26:59.513 00:26:59.513 05:42:02 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:59.513 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.513 No valid NVMe controllers or AIO or URING devices found 00:26:59.513 Initializing NVMe Controllers 00:26:59.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.513 Controller IO queue size 128, less than required. 00:26:59.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.513 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:59.513 Controller IO queue size 128, less than required. 00:26:59.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.513 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:59.513 WARNING: Some requested NVMe devices were skipped 00:26:59.513 05:42:02 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:59.513 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.054 Initializing NVMe Controllers 00:27:02.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.054 Controller IO queue size 128, less than required. 00:27:02.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.054 Controller IO queue size 128, less than required. 00:27:02.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.054 Initialization complete. Launching workers. 00:27:02.054 00:27:02.054 ==================== 00:27:02.054 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:02.054 TCP transport: 00:27:02.054 polls: 17417 00:27:02.054 idle_polls: 9566 00:27:02.054 sock_completions: 7851 00:27:02.054 nvme_completions: 7019 00:27:02.054 submitted_requests: 10733 00:27:02.054 queued_requests: 1 00:27:02.054 00:27:02.054 ==================== 00:27:02.054 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:02.054 TCP transport: 00:27:02.054 polls: 17529 00:27:02.054 idle_polls: 9514 00:27:02.054 sock_completions: 8015 00:27:02.054 nvme_completions: 6591 00:27:02.054 submitted_requests: 10157 00:27:02.054 queued_requests: 1 00:27:02.054 ======================================================== 00:27:02.054 Latency(us) 00:27:02.054 Device Information : IOPS MiB/s Average min max 00:27:02.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1814.57 453.64 72169.17 41323.08 127900.42 00:27:02.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1707.80 426.95 76025.35 33159.31 156670.23 00:27:02.054 ======================================================== 00:27:02.054 Total : 3522.38 880.59 74038.82 33159.31 156670.23 00:27:02.054 00:27:02.054 05:42:04 -- host/perf.sh@66 -- # sync 00:27:02.054 05:42:04 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.054 05:42:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:02.054 05:42:04 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:02.054 05:42:04 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:02.994 05:42:05 -- host/perf.sh@72 -- # ls_guid=fcfda0ed-0792-452c-8b9c-c24127eb1e73 00:27:02.994 05:42:05 -- host/perf.sh@73 -- # get_lvs_free_mb fcfda0ed-0792-452c-8b9c-c24127eb1e73 00:27:02.994 05:42:05 -- common/autotest_common.sh@1353 -- # local lvs_uuid=fcfda0ed-0792-452c-8b9c-c24127eb1e73 00:27:02.994 05:42:05 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:02.994 05:42:05 -- common/autotest_common.sh@1355 -- # local fc 00:27:02.994 05:42:05 -- common/autotest_common.sh@1356 -- # local cs 00:27:02.994 05:42:05 -- common/autotest_common.sh@1357 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:02.994 05:42:06 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:02.994 { 00:27:02.994 "uuid": "fcfda0ed-0792-452c-8b9c-c24127eb1e73", 00:27:02.994 "name": "lvs_0", 00:27:02.994 "base_bdev": "Nvme0n1", 00:27:02.994 "total_data_clusters": 457407, 00:27:02.994 "free_clusters": 457407, 00:27:02.994 "block_size": 512, 00:27:02.994 "cluster_size": 4194304 00:27:02.994 } 00:27:02.994 ]' 00:27:02.994 05:42:06 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="fcfda0ed-0792-452c-8b9c-c24127eb1e73") .free_clusters' 00:27:02.994 05:42:06 -- common/autotest_common.sh@1358 -- # fc=457407 00:27:02.994 05:42:06 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="fcfda0ed-0792-452c-8b9c-c24127eb1e73") .cluster_size' 00:27:03.254 05:42:06 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:03.254 05:42:06 -- common/autotest_common.sh@1362 -- # free_mb=1829628 00:27:03.254 05:42:06 -- common/autotest_common.sh@1363 -- # echo 1829628 00:27:03.254 1829628 00:27:03.254 05:42:06 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:03.254 05:42:06 -- host/perf.sh@78 -- # free_mb=20480 00:27:03.254 05:42:06 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fcfda0ed-0792-452c-8b9c-c24127eb1e73 lbd_0 20480 00:27:03.254 05:42:06 -- host/perf.sh@80 -- # lb_guid=d6c2965e-1875-4e4a-b353-e1d799817319 00:27:03.254 05:42:06 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore d6c2965e-1875-4e4a-b353-e1d799817319 lvs_n_0 00:27:05.166 05:42:08 -- host/perf.sh@83 -- # ls_nested_guid=877db484-f8ac-4b47-8166-d2a1ac329412 00:27:05.166 05:42:08 -- host/perf.sh@84 -- # get_lvs_free_mb 877db484-f8ac-4b47-8166-d2a1ac329412 00:27:05.166 05:42:08 -- common/autotest_common.sh@1353 -- # local lvs_uuid=877db484-f8ac-4b47-8166-d2a1ac329412 00:27:05.166 05:42:08 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:05.166 05:42:08 -- common/autotest_common.sh@1355 -- # local fc 00:27:05.166 05:42:08 -- common/autotest_common.sh@1356 -- # local cs 00:27:05.166 05:42:08 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:05.166 05:42:08 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:05.166 { 00:27:05.166 "uuid": "fcfda0ed-0792-452c-8b9c-c24127eb1e73", 00:27:05.166 "name": "lvs_0", 00:27:05.166 "base_bdev": "Nvme0n1", 00:27:05.166 "total_data_clusters": 457407, 00:27:05.166 "free_clusters": 452287, 00:27:05.166 "block_size": 512, 00:27:05.166 "cluster_size": 4194304 00:27:05.166 }, 00:27:05.166 { 00:27:05.166 "uuid": "877db484-f8ac-4b47-8166-d2a1ac329412", 00:27:05.166 "name": "lvs_n_0", 00:27:05.166 "base_bdev": "d6c2965e-1875-4e4a-b353-e1d799817319", 00:27:05.166 "total_data_clusters": 5114, 00:27:05.166 "free_clusters": 5114, 00:27:05.166 "block_size": 512, 00:27:05.166 "cluster_size": 4194304 00:27:05.166 } 00:27:05.166 ]' 00:27:05.166 05:42:08 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="877db484-f8ac-4b47-8166-d2a1ac329412") .free_clusters' 00:27:05.166 05:42:08 -- common/autotest_common.sh@1358 -- # fc=5114 00:27:05.166 05:42:08 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="877db484-f8ac-4b47-8166-d2a1ac329412") .cluster_size' 00:27:05.166 05:42:08 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:05.166 05:42:08 -- common/autotest_common.sh@1362 
-- # free_mb=20456 00:27:05.166 05:42:08 -- common/autotest_common.sh@1363 -- # echo 20456 00:27:05.167 20456 00:27:05.167 05:42:08 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:05.167 05:42:08 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 877db484-f8ac-4b47-8166-d2a1ac329412 lbd_nest_0 20456 00:27:05.427 05:42:08 -- host/perf.sh@88 -- # lb_nested_guid=76782d48-38ec-4813-a4f4-170084fb1e21 00:27:05.427 05:42:08 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.687 05:42:08 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:05.687 05:42:08 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 76782d48-38ec-4813-a4f4-170084fb1e21 00:27:05.687 05:42:08 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.947 05:42:09 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:05.947 05:42:09 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:05.947 05:42:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:05.947 05:42:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:05.947 05:42:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.947 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.198 Initializing NVMe Controllers 00:27:18.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.198 Initialization complete. Launching workers. 00:27:18.198 ======================================================== 00:27:18.198 Latency(us) 00:27:18.198 Device Information : IOPS MiB/s Average min max 00:27:18.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.10 0.02 21266.78 125.46 45650.11 00:27:18.198 ======================================================== 00:27:18.198 Total : 47.10 0.02 21266.78 125.46 45650.11 00:27:18.198 00:27:18.198 05:42:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:18.198 05:42:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:18.198 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.205 Initializing NVMe Controllers 00:27:28.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:28.205 Initialization complete. Launching workers. 
00:27:28.205 ======================================================== 00:27:28.205 Latency(us) 00:27:28.205 Device Information : IOPS MiB/s Average min max 00:27:28.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 60.40 7.55 16568.36 7012.12 55867.91 00:27:28.205 ======================================================== 00:27:28.205 Total : 60.40 7.55 16568.36 7012.12 55867.91 00:27:28.205 00:27:28.205 05:42:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:28.205 05:42:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:28.205 05:42:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:28.205 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.203 Initializing NVMe Controllers 00:27:38.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.203 Initialization complete. Launching workers. 00:27:38.203 ======================================================== 00:27:38.203 Latency(us) 00:27:38.203 Device Information : IOPS MiB/s Average min max 00:27:38.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8751.63 4.27 3656.12 275.75 10894.63 00:27:38.203 ======================================================== 00:27:38.203 Total : 8751.63 4.27 3656.12 275.75 10894.63 00:27:38.203 00:27:38.203 05:42:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:38.203 05:42:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.203 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.353 Initializing NVMe Controllers 00:27:48.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:48.353 Initialization complete. Launching workers. 00:27:48.353 ======================================================== 00:27:48.353 Latency(us) 00:27:48.353 Device Information : IOPS MiB/s Average min max 00:27:48.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3829.20 478.65 8357.57 635.07 22409.23 00:27:48.353 ======================================================== 00:27:48.353 Total : 3829.20 478.65 8357.57 635.07 22409.23 00:27:48.353 00:27:48.353 05:42:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:48.353 05:42:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:48.353 05:42:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.353 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.352 Initializing NVMe Controllers 00:27:58.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.352 Controller IO queue size 128, less than required. 00:27:58.352 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.352 Initialization complete. Launching workers. 
00:27:58.352 ======================================================== 00:27:58.352 Latency(us) 00:27:58.352 Device Information : IOPS MiB/s Average min max 00:27:58.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15896.66 7.76 8058.09 1976.19 46598.91 00:27:58.352 ======================================================== 00:27:58.352 Total : 15896.66 7.76 8058.09 1976.19 46598.91 00:27:58.352 00:27:58.352 05:43:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:58.352 05:43:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:58.352 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.356 Initializing NVMe Controllers 00:28:08.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:08.356 Controller IO queue size 128, less than required. 00:28:08.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:08.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:08.356 Initialization complete. Launching workers. 00:28:08.356 ======================================================== 00:28:08.356 Latency(us) 00:28:08.356 Device Information : IOPS MiB/s Average min max 00:28:08.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.10 150.01 106895.32 15963.99 208746.88 00:28:08.356 ======================================================== 00:28:08.356 Total : 1200.10 150.01 106895.32 15963.99 208746.88 00:28:08.356 00:28:08.356 05:43:11 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.356 05:43:11 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 76782d48-38ec-4813-a4f4-170084fb1e21 00:28:10.269 05:43:13 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:10.269 05:43:13 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6c2965e-1875-4e4a-b353-e1d799817319 00:28:10.269 05:43:13 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:10.530 05:43:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:10.530 05:43:13 -- host/perf.sh@114 -- # nvmftestfini 00:28:10.530 05:43:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:10.530 05:43:13 -- nvmf/common.sh@116 -- # sync 00:28:10.530 05:43:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:10.530 05:43:13 -- nvmf/common.sh@119 -- # set +e 00:28:10.530 05:43:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:10.530 05:43:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:10.530 rmmod nvme_tcp 00:28:10.530 rmmod nvme_fabrics 00:28:10.530 rmmod nvme_keyring 00:28:10.530 05:43:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:10.530 05:43:13 -- nvmf/common.sh@123 -- # set -e 00:28:10.530 05:43:13 -- nvmf/common.sh@124 -- # return 0 00:28:10.530 05:43:13 -- nvmf/common.sh@477 -- # '[' -n 1961346 ']' 00:28:10.530 05:43:13 -- nvmf/common.sh@478 -- # killprocess 1961346 00:28:10.530 05:43:13 -- common/autotest_common.sh@936 -- # '[' -z 1961346 ']' 00:28:10.530 05:43:13 -- common/autotest_common.sh@940 -- # kill 
-0 1961346 00:28:10.530 05:43:13 -- common/autotest_common.sh@941 -- # uname 00:28:10.530 05:43:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:10.530 05:43:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1961346 00:28:10.530 05:43:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:10.530 05:43:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:10.530 05:43:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1961346' 00:28:10.530 killing process with pid 1961346 00:28:10.530 05:43:13 -- common/autotest_common.sh@955 -- # kill 1961346 00:28:10.530 05:43:13 -- common/autotest_common.sh@960 -- # wait 1961346 00:28:13.072 05:43:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:13.072 05:43:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:13.072 05:43:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:13.072 05:43:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.072 05:43:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:13.072 05:43:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.072 05:43:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.072 05:43:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.984 05:43:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:14.984 00:28:14.984 real 1m32.430s 00:28:14.984 user 5m25.615s 00:28:14.984 sys 0m15.442s 00:28:14.984 05:43:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:14.984 05:43:17 -- common/autotest_common.sh@10 -- # set +x 00:28:14.984 ************************************ 00:28:14.984 END TEST nvmf_perf 00:28:14.984 ************************************ 00:28:14.984 05:43:17 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:14.984 05:43:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:14.984 05:43:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:14.984 05:43:17 -- common/autotest_common.sh@10 -- # set +x 00:28:14.984 ************************************ 00:28:14.984 START TEST nvmf_fio_host 00:28:14.984 ************************************ 00:28:14.984 05:43:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:14.984 * Looking for test storage... 
00:28:14.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.984 05:43:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:14.984 05:43:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:14.984 05:43:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:14.984 05:43:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:14.984 05:43:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:14.984 05:43:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:14.984 05:43:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:14.984 05:43:17 -- scripts/common.sh@335 -- # IFS=.-: 00:28:14.984 05:43:17 -- scripts/common.sh@335 -- # read -ra ver1 00:28:14.984 05:43:17 -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.984 05:43:17 -- scripts/common.sh@336 -- # read -ra ver2 00:28:14.984 05:43:17 -- scripts/common.sh@337 -- # local 'op=<' 00:28:14.984 05:43:17 -- scripts/common.sh@339 -- # ver1_l=2 00:28:14.984 05:43:17 -- scripts/common.sh@340 -- # ver2_l=1 00:28:14.984 05:43:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:14.984 05:43:17 -- scripts/common.sh@343 -- # case "$op" in 00:28:14.984 05:43:17 -- scripts/common.sh@344 -- # : 1 00:28:14.984 05:43:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:14.984 05:43:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:14.984 05:43:17 -- scripts/common.sh@364 -- # decimal 1 00:28:14.984 05:43:17 -- scripts/common.sh@352 -- # local d=1 00:28:14.984 05:43:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.984 05:43:17 -- scripts/common.sh@354 -- # echo 1 00:28:14.984 05:43:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:14.984 05:43:17 -- scripts/common.sh@365 -- # decimal 2 00:28:14.984 05:43:17 -- scripts/common.sh@352 -- # local d=2 00:28:14.984 05:43:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.984 05:43:17 -- scripts/common.sh@354 -- # echo 2 00:28:14.984 05:43:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:14.984 05:43:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:14.984 05:43:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:14.984 05:43:18 -- scripts/common.sh@367 -- # return 0 00:28:14.984 05:43:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.984 05:43:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.984 --rc genhtml_branch_coverage=1 00:28:14.984 --rc genhtml_function_coverage=1 00:28:14.984 --rc genhtml_legend=1 00:28:14.984 --rc geninfo_all_blocks=1 00:28:14.984 --rc geninfo_unexecuted_blocks=1 00:28:14.984 00:28:14.984 ' 00:28:14.984 05:43:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.984 --rc genhtml_branch_coverage=1 00:28:14.984 --rc genhtml_function_coverage=1 00:28:14.984 --rc genhtml_legend=1 00:28:14.984 --rc geninfo_all_blocks=1 00:28:14.984 --rc geninfo_unexecuted_blocks=1 00:28:14.984 00:28:14.984 ' 00:28:14.984 05:43:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.984 --rc genhtml_branch_coverage=1 00:28:14.984 --rc genhtml_function_coverage=1 00:28:14.984 --rc genhtml_legend=1 00:28:14.984 --rc geninfo_all_blocks=1 00:28:14.984 --rc geninfo_unexecuted_blocks=1 00:28:14.984 00:28:14.984 ' 
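The xtrace above steps through the cmp_versions helper in scripts/common.sh, which splits two dotted version strings on ".", "-" and ":" and compares them field by field to decide whether the installed lcov is older than 2.x (and therefore needs the old-style --rc lcov_* option names). Below is a minimal stand-alone sketch of the same field-wise comparison; the function name version_lt is illustrative, not the actual helper, and it assumes purely numeric fields.

# Sketch only: field-wise dotted-version comparison, same idea as the
# cmp_versions/lt helpers traced above (numeric fields only).
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
    done
    return 1                                              # equal, so not less-than
}

# Mirrors the decision in the trace ("lt 1.15 2" succeeded):
version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 style options"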
00:28:14.984 05:43:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.984 --rc genhtml_branch_coverage=1 00:28:14.984 --rc genhtml_function_coverage=1 00:28:14.984 --rc genhtml_legend=1 00:28:14.984 --rc geninfo_all_blocks=1 00:28:14.984 --rc geninfo_unexecuted_blocks=1 00:28:14.984 00:28:14.984 ' 00:28:14.984 05:43:18 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.984 05:43:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.984 05:43:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.984 05:43:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.984 05:43:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@5 -- # export PATH 00:28:14.984 05:43:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.984 05:43:18 -- nvmf/common.sh@7 -- # uname -s 00:28:14.984 05:43:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.984 05:43:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.984 05:43:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.984 05:43:18 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.984 05:43:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.984 05:43:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.984 05:43:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.984 05:43:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.984 05:43:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.984 05:43:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.984 05:43:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.984 05:43:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.984 05:43:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.984 05:43:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.984 05:43:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.984 05:43:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.984 05:43:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.984 05:43:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.984 05:43:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.984 05:43:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- paths/export.sh@5 -- # export PATH 00:28:14.984 05:43:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.984 05:43:18 -- nvmf/common.sh@46 -- # : 0 00:28:14.984 05:43:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:14.984 05:43:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:14.984 05:43:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:14.984 05:43:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.984 05:43:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.984 05:43:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:14.984 05:43:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:14.984 05:43:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:14.984 05:43:18 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:14.984 05:43:18 -- host/fio.sh@14 -- # nvmftestinit 00:28:14.984 05:43:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:14.984 05:43:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.984 05:43:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:14.984 05:43:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:14.984 05:43:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:14.984 05:43:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.985 05:43:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.985 05:43:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.985 05:43:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:14.985 05:43:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:14.985 05:43:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:14.985 05:43:18 -- common/autotest_common.sh@10 -- # set +x 00:28:23.127 05:43:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:23.127 05:43:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:23.127 05:43:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:23.127 05:43:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:23.127 05:43:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:23.127 05:43:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:23.127 05:43:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:23.127 05:43:25 -- nvmf/common.sh@294 -- # net_devs=() 00:28:23.127 05:43:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:23.127 05:43:25 -- nvmf/common.sh@295 -- # e810=() 00:28:23.127 05:43:25 -- nvmf/common.sh@295 -- # local -ga e810 00:28:23.127 05:43:25 -- nvmf/common.sh@296 -- # x722=() 00:28:23.127 05:43:25 -- nvmf/common.sh@296 -- # local -ga x722 00:28:23.127 05:43:25 -- nvmf/common.sh@297 -- # mlx=() 00:28:23.127 05:43:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:23.127 05:43:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.127 05:43:25 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.127 05:43:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:23.127 05:43:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:23.127 05:43:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:23.127 05:43:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.127 05:43:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:23.127 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:23.127 05:43:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.127 05:43:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:23.127 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:23.127 05:43:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:23.127 05:43:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:23.127 05:43:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.127 05:43:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.127 05:43:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.127 05:43:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.127 05:43:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:23.127 Found net devices under 0000:31:00.0: cvl_0_0 00:28:23.127 05:43:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.127 05:43:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.127 05:43:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.128 05:43:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.128 05:43:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.128 05:43:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:23.128 Found net devices under 0000:31:00.1: cvl_0_1 
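The loop traced here resolves each supported NIC PCI address to its kernel interface name by globbing the device's net/ directory in sysfs, which is how the two E810 ports end up reported as cvl_0_0 and cvl_0_1. A minimal sketch of that lookup follows; the PCI addresses are the ones from this run, the variable names are illustrative, and it assumes the sysfs directories exist.

# Sketch only: map a PCI address to its net interface name via sysfs,
# the same pattern as gather_supported_nvmf_pci_devs in nvmf/common.sh.
for pci in 0000:31:00.0 0000:31:00.1; do             # example addresses from this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # one entry per bound interface
    pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
    (( ${#pci_net_devs[@]} )) && echo "Found net devices under $pci: ${pci_net_devs[*]}"
done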
00:28:23.128 05:43:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.128 05:43:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:23.128 05:43:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:23.128 05:43:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:23.128 05:43:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:23.128 05:43:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:23.128 05:43:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.128 05:43:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.128 05:43:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.128 05:43:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:23.128 05:43:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.128 05:43:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.128 05:43:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:23.128 05:43:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.128 05:43:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.128 05:43:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:23.128 05:43:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:23.128 05:43:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.128 05:43:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.128 05:43:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.128 05:43:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.128 05:43:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:23.128 05:43:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.128 05:43:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.128 05:43:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.128 05:43:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:23.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:28:23.128 00:28:23.128 --- 10.0.0.2 ping statistics --- 00:28:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.128 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:28:23.128 05:43:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:28:23.128 00:28:23.128 --- 10.0.0.1 ping statistics --- 00:28:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.128 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:23.128 05:43:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.128 05:43:25 -- nvmf/common.sh@410 -- # return 0 00:28:23.128 05:43:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:23.128 05:43:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.128 05:43:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:23.128 05:43:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:23.128 05:43:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.128 05:43:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:23.128 05:43:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:23.128 05:43:25 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:23.128 05:43:25 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:23.128 05:43:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.128 05:43:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.128 05:43:25 -- host/fio.sh@24 -- # nvmfpid=1982242 00:28:23.128 05:43:25 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:23.128 05:43:25 -- host/fio.sh@28 -- # waitforlisten 1982242 00:28:23.128 05:43:25 -- common/autotest_common.sh@829 -- # '[' -z 1982242 ']' 00:28:23.128 05:43:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.128 05:43:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.128 05:43:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.128 05:43:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.128 05:43:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.128 05:43:25 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:23.128 [2024-12-07 05:43:25.593290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:23.128 [2024-12-07 05:43:25.593384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.128 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.128 [2024-12-07 05:43:25.671748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.128 [2024-12-07 05:43:25.744902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.128 [2024-12-07 05:43:25.745045] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.128 [2024-12-07 05:43:25.745057] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.128 [2024-12-07 05:43:25.745067] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
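The nvmf_tcp_init trace above splits the two ports across network namespaces so the target and initiator talk over a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, and connectivity is verified with ping in both directions. The condensed sketch below restates the traced commands with the same names and addresses; it is a summary, not a replacement for nvmf/common.sh.

# Sketch only: condensed wiring performed by nvmf_tcp_init in this run.
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns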
00:28:23.128 [2024-12-07 05:43:25.745231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.128 [2024-12-07 05:43:25.745347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.128 [2024-12-07 05:43:25.745503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.128 [2024-12-07 05:43:25.745504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.389 05:43:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.389 05:43:26 -- common/autotest_common.sh@862 -- # return 0 00:28:23.389 05:43:26 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:23.389 [2024-12-07 05:43:26.529517] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.389 05:43:26 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:23.389 05:43:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:23.389 05:43:26 -- common/autotest_common.sh@10 -- # set +x 00:28:23.389 05:43:26 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:23.650 Malloc1 00:28:23.650 05:43:26 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.910 05:43:26 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:23.910 05:43:27 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.171 [2024-12-07 05:43:27.275709] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.171 05:43:27 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.432 05:43:27 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:24.432 05:43:27 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.432 05:43:27 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.432 05:43:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:28:24.432 05:43:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.432 05:43:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:28:24.432 05:43:27 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.432 05:43:27 -- common/autotest_common.sh@1330 -- # shift 00:28:24.432 05:43:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:28:24.432 05:43:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:24.432 05:43:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:24.432 05:43:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:24.432 05:43:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:24.433 05:43:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:24.433 05:43:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:24.433 05:43:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.693 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:24.693 fio-3.35 00:28:24.693 Starting 1 thread 00:28:24.693 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.252 00:28:27.252 test: (groupid=0, jobs=1): err= 0: pid=1982825: Sat Dec 7 05:43:30 2024 00:28:27.252 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2004msec) 00:28:27.252 slat (usec): min=2, max=271, avg= 2.16, stdev= 2.31 00:28:27.252 clat (usec): min=3373, max=8183, avg=5137.54, stdev=901.92 00:28:27.252 lat (usec): min=3376, max=8185, avg=5139.70, stdev=901.98 00:28:27.252 clat percentiles (usec): 00:28:27.252 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:28:27.252 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:28:27.252 | 70.00th=[ 5080], 80.00th=[ 5538], 90.00th=[ 6783], 95.00th=[ 7177], 00:28:27.252 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[ 7963], 99.95th=[ 8029], 00:28:27.252 | 99.99th=[ 8160] 00:28:27.252 bw ( KiB/s): min=40168, max=60048, per=99.94%, avg=54774.00, stdev=9743.03, samples=4 00:28:27.252 iops : min=10042, max=15012, avg=13693.50, stdev=2435.76, samples=4 00:28:27.252 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec); 0 zone resets 00:28:27.252 slat (usec): min=2, max=269, avg= 2.22, stdev= 1.80 00:28:27.252 clat (usec): min=2842, max=7557, avg=4144.72, stdev=719.00 00:28:27.252 lat (usec): min=2860, max=7559, avg=4146.94, stdev=719.10 00:28:27.252 clat percentiles (usec): 00:28:27.252 | 1.00th=[ 3228], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3654], 00:28:27.252 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3982], 00:28:27.252 | 70.00th=[ 4113], 80.00th=[ 4490], 90.00th=[ 5473], 95.00th=[ 5735], 00:28:27.252 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6521], 99.95th=[ 6521], 00:28:27.252 | 99.99th=[ 6915] 00:28:27.252 bw ( KiB/s): min=40712, max=60248, per=100.00%, avg=54708.00, stdev=9355.76, samples=4 00:28:27.252 iops : min=10178, max=15062, avg=13677.00, stdev=2338.94, samples=4 00:28:27.252 lat (msec) : 4=31.25%, 10=68.75% 00:28:27.252 cpu : usr=73.99%, sys=24.66%, ctx=16, majf=0, minf=6 00:28:27.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:27.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:27.252 issued rwts: total=27459,27409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:27.252 00:28:27.252 Run status group 0 (all jobs): 00:28:27.252 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2004-2004msec 00:28:27.252 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:28:27.252 05:43:30 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.252 05:43:30 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.252 05:43:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:28:27.252 05:43:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:27.252 05:43:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:28:27.252 05:43:30 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.252 05:43:30 -- common/autotest_common.sh@1330 -- # shift 00:28:27.252 05:43:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:28:27.252 05:43:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # grep libasan 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:27.252 05:43:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:27.252 05:43:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:27.252 05:43:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:27.252 05:43:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:27.252 05:43:30 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:27.252 05:43:30 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.514 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:27.514 fio-3.35 00:28:27.514 Starting 1 thread 00:28:27.514 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.060 00:28:30.060 test: (groupid=0, jobs=1): err= 0: pid=1983611: Sat Dec 7 05:43:33 2024 00:28:30.060 read: IOPS=9335, BW=146MiB/s (153MB/s)(292MiB/2005msec) 00:28:30.060 slat (usec): min=3, max=109, avg= 3.67, stdev= 1.77 00:28:30.060 clat (usec): min=1302, max=16372, avg=8406.68, stdev=2067.95 00:28:30.060 
lat (usec): min=1305, max=16375, avg=8410.35, stdev=2068.14 00:28:30.060 clat percentiles (usec): 00:28:30.060 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6521], 00:28:30.060 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:28:30.060 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11076], 95.00th=[11731], 00:28:30.060 | 99.00th=[13566], 99.50th=[14484], 99.90th=[15533], 99.95th=[15664], 00:28:30.060 | 99.99th=[16057] 00:28:30.060 bw ( KiB/s): min=65088, max=86752, per=49.16%, avg=73432.00, stdev=9332.17, samples=4 00:28:30.060 iops : min= 4068, max= 5422, avg=4589.50, stdev=583.26, samples=4 00:28:30.060 write: IOPS=5678, BW=88.7MiB/s (93.0MB/s)(150MiB/1688msec); 0 zone resets 00:28:30.060 slat (usec): min=39, max=398, avg=41.08, stdev= 7.98 00:28:30.060 clat (usec): min=1323, max=17261, avg=9244.50, stdev=1500.63 00:28:30.060 lat (usec): min=1363, max=17302, avg=9285.59, stdev=1502.68 00:28:30.060 clat percentiles (usec): 00:28:30.060 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8029], 00:28:30.060 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:28:30.060 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11076], 95.00th=[11994], 00:28:30.060 | 99.00th=[13435], 99.50th=[14615], 99.90th=[16057], 99.95th=[17171], 00:28:30.060 | 99.99th=[17171] 00:28:30.060 bw ( KiB/s): min=68000, max=90336, per=83.93%, avg=76256.00, stdev=9732.88, samples=4 00:28:30.060 iops : min= 4250, max= 5646, avg=4766.00, stdev=608.30, samples=4 00:28:30.060 lat (msec) : 2=0.05%, 4=0.39%, 10=74.60%, 20=24.96% 00:28:30.060 cpu : usr=84.88%, sys=13.52%, ctx=12, majf=0, minf=28 00:28:30.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:30.060 issued rwts: total=18718,9585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:30.060 00:28:30.060 Run status group 0 (all jobs): 00:28:30.060 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (307MB), run=2005-2005msec 00:28:30.060 WRITE: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=150MiB (157MB), run=1688-1688msec 00:28:30.060 05:43:33 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.060 05:43:33 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:30.060 05:43:33 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:30.060 05:43:33 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:30.060 05:43:33 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:30.060 05:43:33 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:30.060 05:43:33 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:30.060 05:43:33 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:30.060 05:43:33 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:30.321 05:43:33 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:30.321 05:43:33 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:28:30.321 05:43:33 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 
00:28:30.581 Nvme0n1 00:28:30.842 05:43:33 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:31.414 05:43:34 -- host/fio.sh@53 -- # ls_guid=0f98e759-29fc-4595-a7fa-8bf03873ca04 00:28:31.414 05:43:34 -- host/fio.sh@54 -- # get_lvs_free_mb 0f98e759-29fc-4595-a7fa-8bf03873ca04 00:28:31.414 05:43:34 -- common/autotest_common.sh@1353 -- # local lvs_uuid=0f98e759-29fc-4595-a7fa-8bf03873ca04 00:28:31.414 05:43:34 -- common/autotest_common.sh@1354 -- # local lvs_info 00:28:31.414 05:43:34 -- common/autotest_common.sh@1355 -- # local fc 00:28:31.414 05:43:34 -- common/autotest_common.sh@1356 -- # local cs 00:28:31.414 05:43:34 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:31.414 05:43:34 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:28:31.414 { 00:28:31.414 "uuid": "0f98e759-29fc-4595-a7fa-8bf03873ca04", 00:28:31.414 "name": "lvs_0", 00:28:31.414 "base_bdev": "Nvme0n1", 00:28:31.414 "total_data_clusters": 1787, 00:28:31.414 "free_clusters": 1787, 00:28:31.414 "block_size": 512, 00:28:31.414 "cluster_size": 1073741824 00:28:31.414 } 00:28:31.414 ]' 00:28:31.414 05:43:34 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="0f98e759-29fc-4595-a7fa-8bf03873ca04") .free_clusters' 00:28:31.414 05:43:34 -- common/autotest_common.sh@1358 -- # fc=1787 00:28:31.414 05:43:34 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="0f98e759-29fc-4595-a7fa-8bf03873ca04") .cluster_size' 00:28:31.674 05:43:34 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:28:31.674 05:43:34 -- common/autotest_common.sh@1362 -- # free_mb=1829888 00:28:31.674 05:43:34 -- common/autotest_common.sh@1363 -- # echo 1829888 00:28:31.674 1829888 00:28:31.674 05:43:34 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:31.674 e22d1129-0f9e-45f2-bd8e-c46ece683a5e 00:28:31.674 05:43:34 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:31.935 05:43:34 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:31.936 05:43:35 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:32.196 05:43:35 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.196 05:43:35 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.196 05:43:35 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:28:32.196 05:43:35 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:32.196 05:43:35 -- common/autotest_common.sh@1328 -- # local sanitizers 00:28:32.196 05:43:35 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.196 05:43:35 -- 
common/autotest_common.sh@1330 -- # shift 00:28:32.196 05:43:35 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:28:32.196 05:43:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # grep libasan 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:32.196 05:43:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:32.196 05:43:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:32.196 05:43:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:32.196 05:43:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:32.196 05:43:35 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:32.196 05:43:35 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.764 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:32.764 fio-3.35 00:28:32.764 Starting 1 thread 00:28:32.764 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.311 00:28:35.311 test: (groupid=0, jobs=1): err= 0: pid=1984823: Sat Dec 7 05:43:38 2024 00:28:35.311 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(86.6MiB/2005msec) 00:28:35.311 slat (usec): min=2, max=113, avg= 2.21, stdev= 1.05 00:28:35.311 clat (usec): min=1882, max=10422, avg=6395.92, stdev=479.55 00:28:35.311 lat (usec): min=1900, max=10425, avg=6398.13, stdev=479.50 00:28:35.311 clat percentiles (usec): 00:28:35.312 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:28:35.312 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:28:35.312 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:28:35.312 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 8356], 99.95th=[ 9372], 00:28:35.312 | 99.99th=[10290] 00:28:35.312 bw ( KiB/s): min=43024, max=44792, per=99.97%, avg=44202.00, stdev=812.89, samples=4 00:28:35.312 iops : min=10756, max=11198, avg=11050.50, stdev=203.22, samples=4 00:28:35.312 write: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.4MiB/2005msec); 0 zone resets 00:28:35.312 slat (nsec): min=2093, max=131775, avg=2271.90, stdev=915.21 00:28:35.312 clat (usec): min=1058, max=10125, avg=5117.83, stdev=422.54 00:28:35.312 lat (usec): min=1065, max=10128, avg=5120.11, stdev=422.52 00:28:35.312 clat percentiles (usec): 00:28:35.312 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:28:35.312 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:28:35.312 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5735], 00:28:35.312 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 7701], 99.95th=[ 8717], 00:28:35.312 | 99.99th=[ 9503] 00:28:35.312 bw ( KiB/s): min=43408, max=44792, per=99.96%, avg=44084.00, 
stdev=565.40, samples=4 00:28:35.312 iops : min=10852, max=11198, avg=11021.00, stdev=141.35, samples=4 00:28:35.312 lat (msec) : 2=0.03%, 4=0.19%, 10=99.76%, 20=0.02% 00:28:35.312 cpu : usr=73.80%, sys=25.20%, ctx=41, majf=0, minf=15 00:28:35.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:35.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:35.312 issued rwts: total=22164,22106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:35.312 00:28:35.312 Run status group 0 (all jobs): 00:28:35.312 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=86.6MiB (90.8MB), run=2005-2005msec 00:28:35.312 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.4MiB (90.5MB), run=2005-2005msec 00:28:35.312 05:43:38 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:35.312 05:43:38 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:35.887 05:43:39 -- host/fio.sh@64 -- # ls_nested_guid=6f9c154f-009e-4260-b5f5-5885a294042b 00:28:35.887 05:43:39 -- host/fio.sh@65 -- # get_lvs_free_mb 6f9c154f-009e-4260-b5f5-5885a294042b 00:28:35.887 05:43:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6f9c154f-009e-4260-b5f5-5885a294042b 00:28:35.887 05:43:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:28:35.887 05:43:39 -- common/autotest_common.sh@1355 -- # local fc 00:28:35.887 05:43:39 -- common/autotest_common.sh@1356 -- # local cs 00:28:35.887 05:43:39 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:36.148 05:43:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:28:36.148 { 00:28:36.148 "uuid": "0f98e759-29fc-4595-a7fa-8bf03873ca04", 00:28:36.148 "name": "lvs_0", 00:28:36.148 "base_bdev": "Nvme0n1", 00:28:36.148 "total_data_clusters": 1787, 00:28:36.148 "free_clusters": 0, 00:28:36.148 "block_size": 512, 00:28:36.148 "cluster_size": 1073741824 00:28:36.148 }, 00:28:36.148 { 00:28:36.148 "uuid": "6f9c154f-009e-4260-b5f5-5885a294042b", 00:28:36.148 "name": "lvs_n_0", 00:28:36.148 "base_bdev": "e22d1129-0f9e-45f2-bd8e-c46ece683a5e", 00:28:36.148 "total_data_clusters": 457025, 00:28:36.148 "free_clusters": 457025, 00:28:36.148 "block_size": 512, 00:28:36.148 "cluster_size": 4194304 00:28:36.148 } 00:28:36.148 ]' 00:28:36.148 05:43:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6f9c154f-009e-4260-b5f5-5885a294042b") .free_clusters' 00:28:36.148 05:43:39 -- common/autotest_common.sh@1358 -- # fc=457025 00:28:36.148 05:43:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6f9c154f-009e-4260-b5f5-5885a294042b") .cluster_size' 00:28:36.148 05:43:39 -- common/autotest_common.sh@1359 -- # cs=4194304 00:28:36.148 05:43:39 -- common/autotest_common.sh@1362 -- # free_mb=1828100 00:28:36.148 05:43:39 -- common/autotest_common.sh@1363 -- # echo 1828100 00:28:36.148 1828100 00:28:36.148 05:43:39 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:37.088 0b061938-d2f7-4a0a-8b8b-b2042cc7efda 00:28:37.347 05:43:40 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:37.347 05:43:40 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:37.608 05:43:40 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:37.871 05:43:40 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:37.871 05:43:40 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:37.871 05:43:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:28:37.871 05:43:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.871 05:43:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:28:37.871 05:43:40 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.871 05:43:40 -- common/autotest_common.sh@1330 -- # shift 00:28:37.871 05:43:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:28:37.871 05:43:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:37.871 05:43:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:37.871 05:43:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:28:37.871 05:43:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:28:37.871 05:43:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:28:37.871 05:43:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:37.871 05:43:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:38.134 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:38.134 fio-3.35 00:28:38.134 Starting 1 thread 00:28:38.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.678 00:28:40.678 test: (groupid=0, jobs=1): err= 0: pid=1986015: Sat Dec 7 05:43:43 2024 00:28:40.678 read: IOPS=9767, BW=38.2MiB/s (40.0MB/s)(76.5MiB/2005msec) 00:28:40.678 slat (usec): min=2, max=110, avg= 2.22, stdev= 1.04 00:28:40.678 clat (usec): min=2048, max=12011, 
avg=7241.76, stdev=562.52 00:28:40.678 lat (usec): min=2065, max=12013, avg=7243.98, stdev=562.47 00:28:40.678 clat percentiles (usec): 00:28:40.678 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6783], 00:28:40.678 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:28:40.678 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:28:40.678 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10552], 99.95th=[11207], 00:28:40.678 | 99.99th=[11469] 00:28:40.678 bw ( KiB/s): min=37928, max=39704, per=99.89%, avg=39026.00, stdev=788.43, samples=4 00:28:40.678 iops : min= 9482, max= 9926, avg=9756.50, stdev=197.11, samples=4 00:28:40.678 write: IOPS=9777, BW=38.2MiB/s (40.0MB/s)(76.6MiB/2005msec); 0 zone resets 00:28:40.678 slat (nsec): min=2087, max=95064, avg=2285.26, stdev=726.56 00:28:40.678 clat (usec): min=1023, max=10593, avg=5775.69, stdev=487.74 00:28:40.678 lat (usec): min=1030, max=10595, avg=5777.98, stdev=487.71 00:28:40.678 clat percentiles (usec): 00:28:40.678 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:28:40.679 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:28:40.679 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:28:40.679 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 8717], 99.95th=[ 9896], 00:28:40.679 | 99.99th=[10552] 00:28:40.679 bw ( KiB/s): min=38544, max=39656, per=99.96%, avg=39092.00, stdev=459.22, samples=4 00:28:40.679 iops : min= 9636, max= 9914, avg=9773.00, stdev=114.80, samples=4 00:28:40.679 lat (msec) : 2=0.01%, 4=0.13%, 10=99.79%, 20=0.08% 00:28:40.679 cpu : usr=73.90%, sys=25.15%, ctx=63, majf=0, minf=15 00:28:40.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:40.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.679 issued rwts: total=19583,19603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.679 00:28:40.679 Run status group 0 (all jobs): 00:28:40.679 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2005-2005msec 00:28:40.679 WRITE: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.6MiB (80.3MB), run=2005-2005msec 00:28:40.679 05:43:43 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:40.679 05:43:43 -- host/fio.sh@74 -- # sync 00:28:40.679 05:43:43 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:42.592 05:43:45 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:42.852 05:43:45 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:43.423 05:43:46 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:43.683 05:43:46 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:45.597 05:43:48 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:45.597 05:43:48 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:45.597 05:43:48 -- host/fio.sh@86 -- # nvmftestfini 
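Each of the three fio runs above follows the same invocation pattern: the SPDK NVMe fio plugin is LD_PRELOADed and the NVMe-oF/TCP subsystem is addressed through fio's --filename string rather than a kernel block device. A condensed sketch of that invocation, using the paths and address visible in this run's trace, is shown below; the job-file contents are not reproduced here, and the second run swaps in mock_sgl_config.fio instead of example_config.fio.

# Sketch only: how the fio_nvme runs above drive the NVMe/TCP subsystem
# through the SPDK fio plugin (paths and address taken from the trace).
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096   # the job banners above show ioengine=spdk, iodepth=128 from the job file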
00:28:45.597 05:43:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.597 05:43:48 -- nvmf/common.sh@116 -- # sync 00:28:45.597 05:43:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.597 05:43:48 -- nvmf/common.sh@119 -- # set +e 00:28:45.597 05:43:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.597 05:43:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.597 rmmod nvme_tcp 00:28:45.597 rmmod nvme_fabrics 00:28:45.597 rmmod nvme_keyring 00:28:45.597 05:43:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:45.597 05:43:48 -- nvmf/common.sh@123 -- # set -e 00:28:45.597 05:43:48 -- nvmf/common.sh@124 -- # return 0 00:28:45.597 05:43:48 -- nvmf/common.sh@477 -- # '[' -n 1982242 ']' 00:28:45.597 05:43:48 -- nvmf/common.sh@478 -- # killprocess 1982242 00:28:45.597 05:43:48 -- common/autotest_common.sh@936 -- # '[' -z 1982242 ']' 00:28:45.597 05:43:48 -- common/autotest_common.sh@940 -- # kill -0 1982242 00:28:45.597 05:43:48 -- common/autotest_common.sh@941 -- # uname 00:28:45.597 05:43:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:45.597 05:43:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1982242 00:28:45.858 05:43:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:45.858 05:43:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:45.858 05:43:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1982242' 00:28:45.858 killing process with pid 1982242 00:28:45.858 05:43:48 -- common/autotest_common.sh@955 -- # kill 1982242 00:28:45.858 05:43:48 -- common/autotest_common.sh@960 -- # wait 1982242 00:28:45.858 05:43:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:45.858 05:43:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:45.858 05:43:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:45.858 05:43:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.858 05:43:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:45.858 05:43:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.858 05:43:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.858 05:43:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.406 05:43:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:48.406 00:28:48.406 real 0m33.242s 00:28:48.406 user 2m43.973s 00:28:48.406 sys 0m9.847s 00:28:48.406 05:43:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:48.406 05:43:51 -- common/autotest_common.sh@10 -- # set +x 00:28:48.406 ************************************ 00:28:48.406 END TEST nvmf_fio_host 00:28:48.406 ************************************ 00:28:48.406 05:43:51 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:48.406 05:43:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:48.406 05:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:48.406 05:43:51 -- common/autotest_common.sh@10 -- # set +x 00:28:48.406 ************************************ 00:28:48.406 START TEST nvmf_failover 00:28:48.406 ************************************ 00:28:48.406 05:43:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:48.406 * Looking for test storage... 
00:28:48.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.406 05:43:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:48.406 05:43:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:48.406 05:43:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:48.406 05:43:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:48.406 05:43:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:48.406 05:43:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:48.406 05:43:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:48.406 05:43:51 -- scripts/common.sh@335 -- # IFS=.-: 00:28:48.406 05:43:51 -- scripts/common.sh@335 -- # read -ra ver1 00:28:48.406 05:43:51 -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.406 05:43:51 -- scripts/common.sh@336 -- # read -ra ver2 00:28:48.406 05:43:51 -- scripts/common.sh@337 -- # local 'op=<' 00:28:48.406 05:43:51 -- scripts/common.sh@339 -- # ver1_l=2 00:28:48.407 05:43:51 -- scripts/common.sh@340 -- # ver2_l=1 00:28:48.407 05:43:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:48.407 05:43:51 -- scripts/common.sh@343 -- # case "$op" in 00:28:48.407 05:43:51 -- scripts/common.sh@344 -- # : 1 00:28:48.407 05:43:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:48.407 05:43:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.407 05:43:51 -- scripts/common.sh@364 -- # decimal 1 00:28:48.407 05:43:51 -- scripts/common.sh@352 -- # local d=1 00:28:48.407 05:43:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.407 05:43:51 -- scripts/common.sh@354 -- # echo 1 00:28:48.407 05:43:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:48.407 05:43:51 -- scripts/common.sh@365 -- # decimal 2 00:28:48.407 05:43:51 -- scripts/common.sh@352 -- # local d=2 00:28:48.407 05:43:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.407 05:43:51 -- scripts/common.sh@354 -- # echo 2 00:28:48.407 05:43:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:48.407 05:43:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:48.407 05:43:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:48.407 05:43:51 -- scripts/common.sh@367 -- # return 0 00:28:48.407 05:43:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.407 05:43:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.407 --rc genhtml_branch_coverage=1 00:28:48.407 --rc genhtml_function_coverage=1 00:28:48.407 --rc genhtml_legend=1 00:28:48.407 --rc geninfo_all_blocks=1 00:28:48.407 --rc geninfo_unexecuted_blocks=1 00:28:48.407 00:28:48.407 ' 00:28:48.407 05:43:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.407 --rc genhtml_branch_coverage=1 00:28:48.407 --rc genhtml_function_coverage=1 00:28:48.407 --rc genhtml_legend=1 00:28:48.407 --rc geninfo_all_blocks=1 00:28:48.407 --rc geninfo_unexecuted_blocks=1 00:28:48.407 00:28:48.407 ' 00:28:48.407 05:43:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.407 --rc genhtml_branch_coverage=1 00:28:48.407 --rc genhtml_function_coverage=1 00:28:48.407 --rc genhtml_legend=1 00:28:48.407 --rc geninfo_all_blocks=1 00:28:48.407 --rc geninfo_unexecuted_blocks=1 00:28:48.407 00:28:48.407 ' 
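The block above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, so the older --rc lcov_branch_coverage/--rc lcov_function_coverage spellings are used for LCOV_OPTS. A simplified, self-contained sketch of that field-wise comparison (an illustration, not the exact cmp_versions implementation):
  # Split each version on '.', '-' or ':' and compare numerically, field by field.
  cmp_versions() {
      local op=$2 i a b
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$3"
      for ((i = 0; i < 4; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          ((a < b)) && { [[ $op == '<' ]]; return; }
          ((a > b)) && { [[ $op == '>' ]]; return; }
      done
      [[ $op == '==' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo 'lcov 1.15 is older than 2'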
00:28:48.407 05:43:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.407 --rc genhtml_branch_coverage=1 00:28:48.407 --rc genhtml_function_coverage=1 00:28:48.407 --rc genhtml_legend=1 00:28:48.407 --rc geninfo_all_blocks=1 00:28:48.407 --rc geninfo_unexecuted_blocks=1 00:28:48.407 00:28:48.407 ' 00:28:48.407 05:43:51 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.407 05:43:51 -- nvmf/common.sh@7 -- # uname -s 00:28:48.407 05:43:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.407 05:43:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.407 05:43:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.407 05:43:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.407 05:43:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.407 05:43:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.407 05:43:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.407 05:43:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.407 05:43:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.407 05:43:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.407 05:43:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:48.407 05:43:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:48.407 05:43:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.407 05:43:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.407 05:43:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.407 05:43:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.407 05:43:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.407 05:43:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.407 05:43:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.407 05:43:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.407 05:43:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.407 05:43:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.407 05:43:51 -- paths/export.sh@5 -- # export PATH 00:28:48.407 05:43:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.407 05:43:51 -- nvmf/common.sh@46 -- # : 0 00:28:48.407 05:43:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.407 05:43:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.407 05:43:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.407 05:43:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.407 05:43:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.407 05:43:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.407 05:43:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.407 05:43:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.407 05:43:51 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.407 05:43:51 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.407 05:43:51 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:48.407 05:43:51 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.407 05:43:51 -- host/failover.sh@18 -- # nvmftestinit 00:28:48.407 05:43:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:48.407 05:43:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.407 05:43:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:48.407 05:43:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.407 05:43:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.407 05:43:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.407 05:43:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.407 05:43:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.407 05:43:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.407 05:43:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.407 05:43:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.407 05:43:51 -- common/autotest_common.sh@10 -- # set +x 00:28:56.560 05:43:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:56.561 05:43:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:56.561 05:43:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:56.561 05:43:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:56.561 05:43:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:56.561 05:43:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:56.561 05:43:58 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:56.561 05:43:58 -- nvmf/common.sh@294 -- # net_devs=() 00:28:56.561 05:43:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:56.561 05:43:58 -- nvmf/common.sh@295 -- # e810=() 00:28:56.561 05:43:58 -- nvmf/common.sh@295 -- # local -ga e810 00:28:56.561 05:43:58 -- nvmf/common.sh@296 -- # x722=() 00:28:56.561 05:43:58 -- nvmf/common.sh@296 -- # local -ga x722 00:28:56.561 05:43:58 -- nvmf/common.sh@297 -- # mlx=() 00:28:56.561 05:43:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:56.561 05:43:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.561 05:43:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.561 05:43:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:56.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:56.561 05:43:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.561 05:43:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:56.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:56.561 05:43:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.561 05:43:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.561 05:43:58 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.561 05:43:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:56.561 Found net devices under 0000:31:00.0: cvl_0_0 00:28:56.561 05:43:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.561 05:43:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.561 05:43:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.561 05:43:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:56.561 Found net devices under 0000:31:00.1: cvl_0_1 00:28:56.561 05:43:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:56.561 05:43:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:56.561 05:43:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.561 05:43:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.561 05:43:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:56.561 05:43:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.561 05:43:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.561 05:43:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:56.561 05:43:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.561 05:43:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.561 05:43:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:56.561 05:43:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:56.561 05:43:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.561 05:43:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.561 05:43:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.561 05:43:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.561 05:43:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:56.561 05:43:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.561 05:43:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.561 05:43:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.561 05:43:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:56.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:28:56.561 00:28:56.561 --- 10.0.0.2 ping statistics --- 00:28:56.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.561 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:28:56.561 05:43:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:28:56.561 00:28:56.561 --- 10.0.0.1 ping statistics --- 00:28:56.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.561 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:28:56.561 05:43:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.561 05:43:58 -- nvmf/common.sh@410 -- # return 0 00:28:56.561 05:43:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:56.561 05:43:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.561 05:43:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:56.561 05:43:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.561 05:43:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:56.561 05:43:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:56.561 05:43:58 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:56.561 05:43:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:56.561 05:43:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.561 05:43:58 -- common/autotest_common.sh@10 -- # set +x 00:28:56.561 05:43:58 -- nvmf/common.sh@469 -- # nvmfpid=1991770 00:28:56.561 05:43:58 -- nvmf/common.sh@470 -- # waitforlisten 1991770 00:28:56.561 05:43:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:56.561 05:43:58 -- common/autotest_common.sh@829 -- # '[' -z 1991770 ']' 00:28:56.561 05:43:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.561 05:43:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.561 05:43:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.561 05:43:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.561 05:43:58 -- common/autotest_common.sh@10 -- # set +x 00:28:56.561 [2024-12-07 05:43:59.001545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:56.561 [2024-12-07 05:43:59.001607] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.561 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.561 [2024-12-07 05:43:59.092801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:56.561 [2024-12-07 05:43:59.185291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:56.561 [2024-12-07 05:43:59.185456] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.561 [2024-12-07 05:43:59.185468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.561 [2024-12-07 05:43:59.185478] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
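Stripped of the xtrace noise, the nvmftestinit/nvmfappstart sequence traced above comes down to a network namespace holding one E810 port (cvl_0_0) as the target, the second port (cvl_0_1) left in the root namespace as the initiator, and nvmf_tgt started inside the namespace. A condensed sketch follows; the interface names, 10.0.0.x addresses, core mask and paths are this run's values, and the polling loop only stands in for the waitforlisten helper.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  # Start the target in the namespace and wait until its RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done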
00:28:56.561 [2024-12-07 05:43:59.185616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.561 [2024-12-07 05:43:59.185783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.561 [2024-12-07 05:43:59.185783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.822 05:43:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.822 05:43:59 -- common/autotest_common.sh@862 -- # return 0 00:28:56.822 05:43:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:56.822 05:43:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.822 05:43:59 -- common/autotest_common.sh@10 -- # set +x 00:28:56.822 05:43:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.822 05:43:59 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:56.822 [2024-12-07 05:43:59.984083] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.822 05:44:00 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:57.082 Malloc0 00:28:57.082 05:44:00 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.343 05:44:00 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.343 05:44:00 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.603 [2024-12-07 05:44:00.678494] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.603 05:44:00 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:57.863 [2024-12-07 05:44:00.846934] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:57.863 05:44:00 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:57.863 [2024-12-07 05:44:01.011474] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:57.863 05:44:01 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:57.863 05:44:01 -- host/failover.sh@31 -- # bdevperf_pid=1992142 00:28:57.863 05:44:01 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:57.863 05:44:01 -- host/failover.sh@34 -- # waitforlisten 1992142 /var/tmp/bdevperf.sock 00:28:57.863 05:44:01 -- common/autotest_common.sh@829 -- # '[' -z 1992142 ']' 00:28:57.863 05:44:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.863 05:44:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.863 05:44:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:57.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:57.863 05:44:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.863 05:44:01 -- common/autotest_common.sh@10 -- # set +x 00:28:58.805 05:44:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.805 05:44:01 -- common/autotest_common.sh@862 -- # return 0 00:28:58.805 05:44:01 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:59.067 NVMe0n1 00:28:59.067 05:44:02 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:59.329 00:28:59.329 05:44:02 -- host/failover.sh@39 -- # run_test_pid=1992484 00:28:59.329 05:44:02 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:59.329 05:44:02 -- host/failover.sh@41 -- # sleep 1 00:29:00.716 05:44:03 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.716 [2024-12-07 05:44:03.664879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.664972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be 
set 00:29:00.716 [2024-12-07 05:44:03.665182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 [2024-12-07 05:44:03.665244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7610 is same with the state(5) to be set 00:29:00.716 05:44:03 -- host/failover.sh@45 -- # sleep 3 00:29:04.018 05:44:06 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.018 00:29:04.018 05:44:06 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:04.018 [2024-12-07 05:44:07.094747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.018 [2024-12-07 05:44:07.094784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.018 [2024-12-07 05:44:07.094789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.018 [2024-12-07 05:44:07.094794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.018 [2024-12-07 05:44:07.094799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.018 [2024-12-07 
05:44:07.095003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 [2024-12-07 05:44:07.095041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(5) to be set 00:29:04.019 05:44:07 -- host/failover.sh@50 -- # sleep 3 00:29:07.323 05:44:10 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.323 [2024-12-07 05:44:10.276215] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.323 05:44:10 -- host/failover.sh@55 -- # sleep 1 00:29:08.270 05:44:11 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:08.270 [2024-12-07 05:44:11.454869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.270 [2024-12-07 05:44:11.454952] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is 
same with the state(5) to be set 00:29:08.271 [2024-12-07 05:44:11.455167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.271 [2024-12-07 05:44:11.455171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.271 [2024-12-07 05:44:11.455176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec91b0 is same with the state(5) to be set 00:29:08.271 05:44:11 -- host/failover.sh@59 -- # wait 1992484 00:29:14.865 0 00:29:14.865 05:44:17 -- host/failover.sh@61 -- # killprocess 1992142 00:29:14.865 05:44:17 -- common/autotest_common.sh@936 -- # '[' -z 1992142 ']' 00:29:14.865 05:44:17 -- common/autotest_common.sh@940 -- # kill -0 1992142 00:29:14.865 05:44:17 -- common/autotest_common.sh@941 -- # uname 00:29:14.865 05:44:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:14.865 05:44:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1992142 00:29:14.865 05:44:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:14.865 05:44:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:14.865 05:44:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1992142' 00:29:14.865 killing process with pid 1992142 00:29:14.865 05:44:17 -- common/autotest_common.sh@955 -- # kill 1992142 00:29:14.865 05:44:17 -- common/autotest_common.sh@960 -- # wait 1992142 00:29:14.865 05:44:17 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:14.865 [2024-12-07 05:44:01.075723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:14.865 [2024-12-07 05:44:01.075781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992142 ] 00:29:14.865 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.865 [2024-12-07 05:44:01.136340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.865 [2024-12-07 05:44:01.198461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.865 Running I/O for 15 seconds... 
00:29:14.865 [2024-12-07 05:44:03.665403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.865 [2024-12-07 05:44:03.665441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... entries from [2024-12-07 05:44:03.665459] through [2024-12-07 05:44:03.667592] omitted: the same pair of a nvme_io_qpair_print_command *NOTICE* (READ/WRITE sqid:1, lba 39336-40672) followed by an ABORTED - SQ DELETION (00/08) completion repeats for each remaining queued I/O ...]
00:29:14.867 [2024-12-07 05:44:03.667601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa730c0 is same with the state(5) to be set 00:29:14.867 [2024-12-07 05:44:03.667610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.867 [2024-12-07 05:44:03.667616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.867 [2024-12-07 05:44:03.667624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40216 len:8 PRP1 0x0 PRP2 0x0 00:29:14.867 [2024-12-07 05:44:03.667631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.867 [2024-12-07 05:44:03.667667] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa730c0 was disconnected and freed. reset controller.
00:29:14.867 [2024-12-07 05:44:03.667683] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:14.867 [2024-12-07 05:44:03.667704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.867 [2024-12-07 05:44:03.667712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.867 [2024-12-07 05:44:03.667721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.867 [2024-12-07 05:44:03.667728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.867 [2024-12-07 05:44:03.667736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.867 [2024-12-07 05:44:03.667743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.867 [2024-12-07 05:44:03.667751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.867 [2024-12-07 05:44:03.667758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.867 [2024-12-07 05:44:03.667765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.867 [2024-12-07 05:44:03.670141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.867 [2024-12-07 05:44:03.670170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53f40 (9): Bad file descriptor 00:29:14.867 [2024-12-07 05:44:03.699751] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:14.867 [2024-12-07 05:44:07.095344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.867 [2024-12-07 05:44:07.095379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... entries from [2024-12-07 05:44:07.095396] through [2024-12-07 05:44:07.096055] omitted: the same pair of a nvme_io_qpair_print_command *NOTICE* (READ/WRITE sqid:1, lba 71856-72656) followed by an ABORTED - SQ DELETION (00/08) completion repeats for each remaining queued I/O ...]
00:29:14.868 [2024-12-07 05:44:07.096064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72664
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:14.868 [2024-12-07 05:44:07.096237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096568] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.868 [2024-12-07 05:44:07.096718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.868 [2024-12-07 05:44:07.096776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.868 [2024-12-07 05:44:07.096783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:14.869 [2024-12-07 05:44:07.096913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.096952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.096969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.096985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.096994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.869 [2024-12-07 05:44:07.097428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:07.097511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.869 [2024-12-07 05:44:07.097542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.869 [2024-12-07 05:44:07.097549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 00:29:14.869 [2024-12-07 05:44:07.097557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097594] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa60390 was disconnected and freed. reset controller. 
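The block above is the per-command dump emitted while qpair 0xa60390 was being torn down: each READ/WRITE print from nvme_io_qpair_print_command is paired with an ABORTED - SQ DELETION (00/08) completion, the queued requests are aborted, and the qpair is freed ahead of the controller reset. As a rough way to tally that output offline (a minimal sketch, assuming this console text has been saved to a hypothetical console.log; the file name and the counting approach are not part of the test run itself):
  # count the aborted submissions by opcode (READ vs. WRITE)
  grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' console.log | awk '{print $3}' | sort | uniq -c
  # count the matching SQ-deletion abort completions
  grep -c 'ABORTED - SQ DELETION' console.log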
00:29:14.869 [2024-12-07 05:44:07.097604] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:14.869 [2024-12-07 05:44:07.097623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.869 [2024-12-07 05:44:07.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.869 [2024-12-07 05:44:07.097647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.869 [2024-12-07 05:44:07.097663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.869 [2024-12-07 05:44:07.097678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:07.097685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.869 [2024-12-07 05:44:07.097710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53f40 (9): Bad file descriptor 00:29:14.869 [2024-12-07 05:44:07.099994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.869 [2024-12-07 05:44:07.183243] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
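The sequence above is the failover path for nqn.2016-06.io.spdk:cnode1: the disconnected qpair is freed, bdev_nvme_failover_trid moves the target from 10.0.0.2:4421 to 10.0.0.2:4422, the outstanding admin ASYNC EVENT REQUESTs are aborted, the old TCP qpair (0xa53f40) reports a bad file descriptor on flush, and the controller reset completes successfully. A minimal sketch for pulling just those milestone notices out of a saved copy of this output (console.log is a hypothetical file name, not something the test produces):
  # extract the qpair teardown, failover, and reset milestones in order
  grep -E 'bdev_nvme_disconnected_qpair_cb|bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' console.log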
00:29:14.869 [2024-12-07 05:44:11.455569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.455988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.455997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.869 [2024-12-07 05:44:11.456004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.869 [2024-12-07 05:44:11.456020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:14.870 [2024-12-07 05:44:11.456462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.456709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.456988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.456995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.457050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.457082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.457117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.457133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:14.870 [2024-12-07 05:44:11.457143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.870 [2024-12-07 05:44:11.457166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.870 [2024-12-07 05:44:11.457194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.870 [2024-12-07 05:44:11.457201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457477] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12952 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.871 [2024-12-07 05:44:11.457716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.871 [2024-12-07 05:44:11.457732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.871 [2024-12-07 05:44:11.457764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.871 [2024-12-07 05:44:11.457771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12464 len:8 PRP1 0x0 PRP2 0x0 00:29:14.871 [2024-12-07 05:44:11.457780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457821] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa74fd0 was disconnected and freed. reset controller. 
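The long run of ABORTED - SQ DELETION completions above is the I/O that was still queued when the submission queue on the failing path was deleted; every entry follows the same READ/WRITE-then-abort pattern. A minimal offline tally, assuming the console output has been saved to try.txt (the capture file the test itself writes, as the later cat step shows):

  # Count queued commands that were aborted by the SQ deletion during failover.
  grep -c 'ABORTED - SQ DELETION' try.txt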
00:29:14.871 [2024-12-07 05:44:11.457831] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:14.871 [2024-12-07 05:44:11.457851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.871 [2024-12-07 05:44:11.457859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.871 [2024-12-07 05:44:11.457877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.871 [2024-12-07 05:44:11.457893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:14.871 [2024-12-07 05:44:11.457908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.871 [2024-12-07 05:44:11.457916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.871 [2024-12-07 05:44:11.457946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53f40 (9): Bad file descriptor 00:29:14.871 [2024-12-07 05:44:11.460103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.871 [2024-12-07 05:44:11.615538] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
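A similarly hedged one-liner for summarizing the path switches reported by bdev_nvme_failover_trid in that same assumed capture file:

  # Print each recorded failover transition with its occurrence count.
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' try.txt | sort | uniq -c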
00:29:14.871 
00:29:14.871 Latency(us)
00:29:14.871 [2024-12-07T04:44:18.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.871 [2024-12-07T04:44:18.111Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:14.871 Verification LBA range: start 0x0 length 0x4000
00:29:14.871 NVMe0n1 : 15.00 19777.40 77.26 1001.71 0.00 6144.24 515.41 12888.75
00:29:14.871 [2024-12-07T04:44:18.111Z] ===================================================================================================================
00:29:14.871 [2024-12-07T04:44:18.111Z] Total : 19777.40 77.26 1001.71 0.00 6144.24 515.41 12888.75
00:29:14.871 Received shutdown signal, test time was about 15.000000 seconds
00:29:14.871 
00:29:14.871 Latency(us)
00:29:14.871 [2024-12-07T04:44:18.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.871 [2024-12-07T04:44:18.111Z] ===================================================================================================================
00:29:14.871 [2024-12-07T04:44:18.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:14.871 05:44:17 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:14.871 05:44:17 -- host/failover.sh@65 -- # count=3
00:29:14.871 05:44:17 -- host/failover.sh@67 -- # (( count != 3 ))
00:29:14.871 05:44:17 -- host/failover.sh@73 -- # bdevperf_pid=1995383
00:29:14.871 05:44:17 -- host/failover.sh@75 -- # waitforlisten 1995383 /var/tmp/bdevperf.sock
00:29:14.871 05:44:17 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:14.871 05:44:17 -- common/autotest_common.sh@829 -- # '[' -z 1995383 ']'
00:29:14.871 05:44:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:14.871 05:44:17 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:14.871 05:44:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:14.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
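The @65-@67 trace above is the pass criterion of the 15-second failover run: the grep must find exactly three 'Resetting controller successful' messages, presumably one per dropped path, otherwise the test fails. A condensed sketch of that check, again with try.txt standing in for the captured bdevperf output:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi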
00:29:14.871 05:44:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.871 05:44:17 -- common/autotest_common.sh@10 -- # set +x 00:29:15.813 05:44:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.813 05:44:18 -- common/autotest_common.sh@862 -- # return 0 00:29:15.813 05:44:18 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:15.813 [2024-12-07 05:44:18.838168] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:15.813 05:44:18 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:15.813 [2024-12-07 05:44:19.014633] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:15.813 05:44:19 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:16.384 NVMe0n1 00:29:16.384 05:44:19 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:16.645 00:29:16.645 05:44:19 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:16.905 00:29:16.905 05:44:20 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:16.905 05:44:20 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:17.166 05:44:20 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:17.427 05:44:20 -- host/failover.sh@87 -- # sleep 3 00:29:20.726 05:44:23 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.726 05:44:23 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:20.726 05:44:23 -- host/failover.sh@90 -- # run_test_pid=1996560 00:29:20.726 05:44:23 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:20.726 05:44:23 -- host/failover.sh@92 -- # wait 1996560 00:29:21.669 0 00:29:21.669 05:44:24 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:21.669 [2024-12-07 05:44:17.922456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:21.669 [2024-12-07 05:44:17.922536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995383 ] 00:29:21.669 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.669 [2024-12-07 05:44:17.985357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.669 [2024-12-07 05:44:18.046673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.669 [2024-12-07 05:44:20.409554] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:21.669 [2024-12-07 05:44:20.409605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.669 [2024-12-07 05:44:20.409617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.669 [2024-12-07 05:44:20.409626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.669 [2024-12-07 05:44:20.409634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.669 [2024-12-07 05:44:20.409642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.669 [2024-12-07 05:44:20.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.669 [2024-12-07 05:44:20.409657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.669 [2024-12-07 05:44:20.409665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.669 [2024-12-07 05:44:20.409672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.669 [2024-12-07 05:44:20.409696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.669 [2024-12-07 05:44:20.409710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x911f40 (9): Bad file descriptor 00:29:21.669 [2024-12-07 05:44:20.501684] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:21.669 Running I/O for 1 seconds... 
00:29:21.669 
00:29:21.669 Latency(us)
00:29:21.669 [2024-12-07T04:44:24.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.669 [2024-12-07T04:44:24.909Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:21.669 Verification LBA range: start 0x0 length 0x4000
00:29:21.669 NVMe0n1 : 1.00 20083.04 78.45 0.00 0.00 6343.88 1351.68 7809.71
00:29:21.669 [2024-12-07T04:44:24.910Z] ===================================================================================================================
00:29:21.670 [2024-12-07T04:44:24.910Z] Total : 20083.04 78.45 0.00 0.00 6343.88 1351.68 7809.71
00:29:21.670 05:44:24 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:21.670 05:44:24 -- host/failover.sh@95 -- # grep -q NVMe0
00:29:21.934 05:44:24 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:21.934 05:44:25 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:21.934 05:44:25 -- host/failover.sh@99 -- # grep -q NVMe0
00:29:22.197 05:44:25 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:22.197 05:44:25 -- host/failover.sh@101 -- # sleep 3
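Pulling the RPC calls out of the trace above gives the multipath wiring this phase of the test exercises: extra listeners on the target, one bdevperf controller attached through three ports, then the active path removed while the controller must stay registered. This is a sketch rather than the verbatim script, with rpc.py standing in for the full scripts/rpc.py path shown in the log:

  RPC="./scripts/rpc.py"                 # shortened from the workspace path in the trace
  NQN="nqn.2016-06.io.spdk:cnode1"
  # Target side: expose two alternate listeners next to the existing 4420 one.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
  # Initiator side: attach the same subsystem through all three ports.
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
           -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done
  # Drop the active path, give failover a moment, and confirm NVMe0 survives.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0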
00:29:25.968 05:44:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:25.968 05:44:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:25.968 rmmod nvme_tcp 00:29:25.968 rmmod nvme_fabrics 00:29:25.968 rmmod nvme_keyring 00:29:25.968 05:44:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.969 05:44:29 -- nvmf/common.sh@123 -- # set -e 00:29:25.969 05:44:29 -- nvmf/common.sh@124 -- # return 0 00:29:25.969 05:44:29 -- nvmf/common.sh@477 -- # '[' -n 1991770 ']' 00:29:25.969 05:44:29 -- nvmf/common.sh@478 -- # killprocess 1991770 00:29:25.969 05:44:29 -- common/autotest_common.sh@936 -- # '[' -z 1991770 ']' 00:29:25.969 05:44:29 -- common/autotest_common.sh@940 -- # kill -0 1991770 00:29:25.969 05:44:29 -- common/autotest_common.sh@941 -- # uname 00:29:25.969 05:44:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:25.969 05:44:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1991770 00:29:25.969 05:44:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:25.969 05:44:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:25.969 05:44:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1991770' 00:29:25.969 killing process with pid 1991770 00:29:25.969 05:44:29 -- common/autotest_common.sh@955 -- # kill 1991770 00:29:25.969 05:44:29 -- common/autotest_common.sh@960 -- # wait 1991770 00:29:26.229 05:44:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:26.229 05:44:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:26.229 05:44:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:26.229 05:44:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:26.229 05:44:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:26.229 05:44:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.229 05:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:26.229 05:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.144 05:44:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:28.144 00:29:28.144 real 0m40.207s 00:29:28.144 user 2m3.058s 00:29:28.144 sys 0m8.558s 00:29:28.144 05:44:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:28.144 05:44:31 -- common/autotest_common.sh@10 -- # set +x 00:29:28.144 ************************************ 00:29:28.144 END TEST nvmf_failover 00:29:28.144 ************************************ 00:29:28.144 05:44:31 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:28.144 05:44:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:28.144 05:44:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.145 05:44:31 -- common/autotest_common.sh@10 -- # set +x 00:29:28.145 ************************************ 00:29:28.145 START TEST nvmf_discovery 00:29:28.145 ************************************ 00:29:28.145 05:44:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:28.406 * Looking for test storage... 
00:29:28.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.406 05:44:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:28.406 05:44:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:28.406 05:44:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:28.407 05:44:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:28.407 05:44:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:28.407 05:44:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:28.407 05:44:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:28.407 05:44:31 -- scripts/common.sh@335 -- # IFS=.-: 00:29:28.407 05:44:31 -- scripts/common.sh@335 -- # read -ra ver1 00:29:28.407 05:44:31 -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.407 05:44:31 -- scripts/common.sh@336 -- # read -ra ver2 00:29:28.407 05:44:31 -- scripts/common.sh@337 -- # local 'op=<' 00:29:28.407 05:44:31 -- scripts/common.sh@339 -- # ver1_l=2 00:29:28.407 05:44:31 -- scripts/common.sh@340 -- # ver2_l=1 00:29:28.407 05:44:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:28.407 05:44:31 -- scripts/common.sh@343 -- # case "$op" in 00:29:28.407 05:44:31 -- scripts/common.sh@344 -- # : 1 00:29:28.407 05:44:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:28.407 05:44:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:28.407 05:44:31 -- scripts/common.sh@364 -- # decimal 1 00:29:28.407 05:44:31 -- scripts/common.sh@352 -- # local d=1 00:29:28.407 05:44:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.407 05:44:31 -- scripts/common.sh@354 -- # echo 1 00:29:28.407 05:44:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:28.407 05:44:31 -- scripts/common.sh@365 -- # decimal 2 00:29:28.407 05:44:31 -- scripts/common.sh@352 -- # local d=2 00:29:28.407 05:44:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.407 05:44:31 -- scripts/common.sh@354 -- # echo 2 00:29:28.407 05:44:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:28.407 05:44:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:28.407 05:44:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:28.407 05:44:31 -- scripts/common.sh@367 -- # return 0 00:29:28.407 05:44:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.407 05:44:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.407 --rc genhtml_branch_coverage=1 00:29:28.407 --rc genhtml_function_coverage=1 00:29:28.407 --rc genhtml_legend=1 00:29:28.407 --rc geninfo_all_blocks=1 00:29:28.407 --rc geninfo_unexecuted_blocks=1 00:29:28.407 00:29:28.407 ' 00:29:28.407 05:44:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.407 --rc genhtml_branch_coverage=1 00:29:28.407 --rc genhtml_function_coverage=1 00:29:28.407 --rc genhtml_legend=1 00:29:28.407 --rc geninfo_all_blocks=1 00:29:28.407 --rc geninfo_unexecuted_blocks=1 00:29:28.407 00:29:28.407 ' 00:29:28.407 05:44:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.407 --rc genhtml_branch_coverage=1 00:29:28.407 --rc genhtml_function_coverage=1 00:29:28.407 --rc genhtml_legend=1 00:29:28.407 --rc geninfo_all_blocks=1 00:29:28.407 --rc geninfo_unexecuted_blocks=1 00:29:28.407 00:29:28.407 ' 
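The scripts/common.sh walk above is a field-by-field dotted-version comparison, here concluding that the installed lcov (1.x) is older than 2 so the legacy LCOV_OPTS get exported. A much shorter hedged equivalent that leans on GNU sort -V instead of the script's explicit per-field loop:

  # version_lt A B: true when version A is strictly older than B, e.g. version_lt 1.15 2.
  version_lt() {
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo "lcov is older than 2, keep the legacy coverage options"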
00:29:28.407 05:44:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.407 --rc genhtml_branch_coverage=1 00:29:28.407 --rc genhtml_function_coverage=1 00:29:28.407 --rc genhtml_legend=1 00:29:28.407 --rc geninfo_all_blocks=1 00:29:28.407 --rc geninfo_unexecuted_blocks=1 00:29:28.407 00:29:28.407 ' 00:29:28.407 05:44:31 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.407 05:44:31 -- nvmf/common.sh@7 -- # uname -s 00:29:28.407 05:44:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.407 05:44:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.407 05:44:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.407 05:44:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.407 05:44:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.407 05:44:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.407 05:44:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.407 05:44:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.407 05:44:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.407 05:44:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.407 05:44:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:28.407 05:44:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:28.407 05:44:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.407 05:44:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.407 05:44:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.407 05:44:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.407 05:44:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.407 05:44:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.407 05:44:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.407 05:44:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.407 05:44:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.407 05:44:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.407 05:44:31 -- paths/export.sh@5 -- # export PATH 00:29:28.407 05:44:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.407 05:44:31 -- nvmf/common.sh@46 -- # : 0 00:29:28.407 05:44:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:28.407 05:44:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:28.407 05:44:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:28.407 05:44:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.407 05:44:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.407 05:44:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:28.407 05:44:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:28.407 05:44:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:28.407 05:44:31 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:28.407 05:44:31 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:28.407 05:44:31 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:28.407 05:44:31 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:28.407 05:44:31 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:28.407 05:44:31 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:28.407 05:44:31 -- host/discovery.sh@25 -- # nvmftestinit 00:29:28.407 05:44:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:28.407 05:44:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.407 05:44:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:28.407 05:44:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:28.407 05:44:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:28.407 05:44:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.407 05:44:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.407 05:44:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.407 05:44:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:28.407 05:44:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:28.407 05:44:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:28.407 05:44:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.554 05:44:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:36.554 05:44:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:36.554 05:44:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:36.554 05:44:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:36.554 05:44:38 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:36.554 05:44:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:36.554 05:44:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:36.554 05:44:38 -- nvmf/common.sh@294 -- # net_devs=() 00:29:36.554 05:44:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:36.554 05:44:38 -- nvmf/common.sh@295 -- # e810=() 00:29:36.554 05:44:38 -- nvmf/common.sh@295 -- # local -ga e810 00:29:36.554 05:44:38 -- nvmf/common.sh@296 -- # x722=() 00:29:36.554 05:44:38 -- nvmf/common.sh@296 -- # local -ga x722 00:29:36.554 05:44:38 -- nvmf/common.sh@297 -- # mlx=() 00:29:36.554 05:44:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:36.554 05:44:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.554 05:44:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:36.554 05:44:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:36.554 05:44:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.554 05:44:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.554 05:44:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.554 05:44:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.554 05:44:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.554 
05:44:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.554 05:44:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.554 05:44:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.554 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.554 05:44:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.554 05:44:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.554 05:44:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.554 05:44:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.554 05:44:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.554 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.554 05:44:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.554 05:44:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:36.554 05:44:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:36.554 05:44:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:36.554 05:44:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.554 05:44:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.554 05:44:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.554 05:44:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:36.554 05:44:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.554 05:44:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.554 05:44:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:36.554 05:44:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.554 05:44:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.554 05:44:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:36.554 05:44:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:36.554 05:44:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.554 05:44:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.554 05:44:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.554 05:44:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.554 05:44:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:36.554 05:44:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.554 05:44:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.554 05:44:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.554 05:44:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:36.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:29:36.554 00:29:36.554 --- 10.0.0.2 ping statistics --- 00:29:36.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.555 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:29:36.555 05:44:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:29:36.555 00:29:36.555 --- 10.0.0.1 ping statistics --- 00:29:36.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.555 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:29:36.555 05:44:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.555 05:44:38 -- nvmf/common.sh@410 -- # return 0 00:29:36.555 05:44:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:36.555 05:44:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.555 05:44:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:36.555 05:44:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:36.555 05:44:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.555 05:44:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:36.555 05:44:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:36.555 05:44:38 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:36.555 05:44:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:36.555 05:44:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.555 05:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:36.555 05:44:38 -- nvmf/common.sh@469 -- # nvmfpid=2001747 00:29:36.555 05:44:38 -- nvmf/common.sh@470 -- # waitforlisten 2001747 00:29:36.555 05:44:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:36.555 05:44:38 -- common/autotest_common.sh@829 -- # '[' -z 2001747 ']' 00:29:36.555 05:44:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.555 05:44:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.555 05:44:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.555 05:44:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.555 05:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:36.555 [2024-12-07 05:44:39.043203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:36.555 [2024-12-07 05:44:39.043267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.555 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.555 [2024-12-07 05:44:39.132467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.555 [2024-12-07 05:44:39.216313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:36.555 [2024-12-07 05:44:39.216451] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.555 [2024-12-07 05:44:39.216460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.555 [2024-12-07 05:44:39.216468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
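A condensed recap of the nvmf_tcp_init sequence traced above, before the target app comes up — the interface names (cvl_0_0 / cvl_0_1), namespace name, and addresses are all copied from the log itself, so treat this as an illustrative summary of what the harness ran rather than part of the test script:

# cvl_0_0 becomes the target-side port inside a private network namespace;
# cvl_0_1 stays in the root namespace and acts as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
modprobe nvme-tcp                                    # kernel NVMe/TCP transport for the host side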
00:29:36.555 [2024-12-07 05:44:39.216499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.817 05:44:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:36.817 05:44:39 -- common/autotest_common.sh@862 -- # return 0 00:29:36.817 05:44:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:36.817 05:44:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 05:44:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.817 05:44:39 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.817 05:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 [2024-12-07 05:44:39.912678] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.817 05:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.817 05:44:39 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:36.817 05:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 [2024-12-07 05:44:39.920838] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:36.817 05:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.817 05:44:39 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:36.817 05:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 null0 00:29:36.817 05:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.817 05:44:39 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:36.817 05:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 null1 00:29:36.817 05:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.817 05:44:39 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:36.817 05:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 05:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.817 05:44:39 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:36.817 05:44:39 -- host/discovery.sh@45 -- # hostpid=2002023 00:29:36.817 05:44:39 -- host/discovery.sh@46 -- # waitforlisten 2002023 /tmp/host.sock 00:29:36.817 05:44:39 -- common/autotest_common.sh@829 -- # '[' -z 2002023 ']' 00:29:36.817 05:44:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:36.817 05:44:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.817 05:44:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:36.817 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:36.817 05:44:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.817 05:44:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 [2024-12-07 05:44:39.970937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:36.817 [2024-12-07 05:44:39.970973] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002023 ] 00:29:36.817 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.817 [2024-12-07 05:44:40.025964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.079 [2024-12-07 05:44:40.089660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:37.079 [2024-12-07 05:44:40.089786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.652 05:44:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.653 05:44:40 -- common/autotest_common.sh@862 -- # return 0 00:29:37.653 05:44:40 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.653 05:44:40 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.653 05:44:40 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.653 05:44:40 -- host/discovery.sh@72 -- # notify_id=0 00:29:37.653 05:44:40 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # sort 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # xargs 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.653 05:44:40 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:37.653 05:44:40 -- host/discovery.sh@79 -- # get_bdev_list 00:29:37.653 05:44:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.653 05:44:40 -- host/discovery.sh@55 -- # sort 00:29:37.653 05:44:40 -- host/discovery.sh@55 -- # xargs 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.653 05:44:40 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:37.653 05:44:40 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.653 05:44:40 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.653 05:44:40 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:37.653 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # xargs 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.653 05:44:40 -- host/discovery.sh@59 -- # sort 00:29:37.653 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.914 05:44:40 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:37.914 05:44:40 -- host/discovery.sh@83 -- # get_bdev_list 00:29:37.914 05:44:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.914 05:44:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.914 05:44:40 -- host/discovery.sh@55 -- # sort 00:29:37.915 05:44:40 -- host/discovery.sh@55 -- # xargs 00:29:37.915 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:40 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:37.915 05:44:40 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:37.915 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:40 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:37.915 05:44:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.915 05:44:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.915 05:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:40 -- host/discovery.sh@59 -- # sort 00:29:37.915 05:44:40 -- host/discovery.sh@59 -- # xargs 00:29:37.915 05:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:41 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:37.915 05:44:41 -- host/discovery.sh@87 -- # get_bdev_list 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # sort 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # xargs 00:29:37.915 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:41 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:37.915 05:44:41 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.915 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 [2024-12-07 05:44:41.059825] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.915 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:41 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:37.915 05:44:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.915 05:44:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.915 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:41 -- host/discovery.sh@59 -- # sort 00:29:37.915 05:44:41 -- 
host/discovery.sh@59 -- # xargs 00:29:37.915 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.915 05:44:41 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:37.915 05:44:41 -- host/discovery.sh@93 -- # get_bdev_list 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.915 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # sort 00:29:37.915 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:37.915 05:44:41 -- host/discovery.sh@55 -- # xargs 00:29:37.915 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.176 05:44:41 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:38.176 05:44:41 -- host/discovery.sh@94 -- # get_notification_count 00:29:38.176 05:44:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:38.176 05:44:41 -- host/discovery.sh@74 -- # jq '. | length' 00:29:38.176 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.176 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:38.176 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.176 05:44:41 -- host/discovery.sh@74 -- # notification_count=0 00:29:38.176 05:44:41 -- host/discovery.sh@75 -- # notify_id=0 00:29:38.176 05:44:41 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:38.176 05:44:41 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:38.176 05:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.176 05:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:38.176 05:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.176 05:44:41 -- host/discovery.sh@100 -- # sleep 1 00:29:38.747 [2024-12-07 05:44:41.800910] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:38.747 [2024-12-07 05:44:41.800935] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:38.747 [2024-12-07 05:44:41.800948] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:38.747 [2024-12-07 05:44:41.928357] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:39.006 [2024-12-07 05:44:42.032516] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:39.006 [2024-12-07 05:44:42.032538] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:39.006 05:44:42 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:39.006 05:44:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:39.006 05:44:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:39.006 05:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.006 05:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:39.006 05:44:42 -- host/discovery.sh@59 -- # sort 00:29:39.006 05:44:42 -- host/discovery.sh@59 -- # xargs 00:29:39.006 05:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@102 -- # get_bdev_list 00:29:39.265 05:44:42 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.265 05:44:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.265 05:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.265 05:44:42 -- host/discovery.sh@55 -- # sort 00:29:39.265 05:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:39.265 05:44:42 -- host/discovery.sh@55 -- # xargs 00:29:39.265 05:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:39.265 05:44:42 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:39.265 05:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.265 05:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:39.265 05:44:42 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:39.265 05:44:42 -- host/discovery.sh@63 -- # sort -n 00:29:39.265 05:44:42 -- host/discovery.sh@63 -- # xargs 00:29:39.265 05:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@104 -- # get_notification_count 00:29:39.265 05:44:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:39.265 05:44:42 -- host/discovery.sh@74 -- # jq '. | length' 00:29:39.265 05:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.265 05:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:39.265 05:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@74 -- # notification_count=1 00:29:39.265 05:44:42 -- host/discovery.sh@75 -- # notify_id=1 00:29:39.265 05:44:42 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:39.265 05:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.265 05:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:39.265 05:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.265 05:44:42 -- host/discovery.sh@109 -- # sleep 1 00:29:40.207 05:44:43 -- host/discovery.sh@110 -- # get_bdev_list 00:29:40.207 05:44:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.207 05:44:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.207 05:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.207 05:44:43 -- host/discovery.sh@55 -- # sort 00:29:40.207 05:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:40.207 05:44:43 -- host/discovery.sh@55 -- # xargs 00:29:40.469 05:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.469 05:44:43 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:40.469 05:44:43 -- host/discovery.sh@111 -- # get_notification_count 00:29:40.469 05:44:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:40.469 05:44:43 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:40.469 05:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.469 05:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 05:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.469 05:44:43 -- host/discovery.sh@74 -- # notification_count=1 00:29:40.469 05:44:43 -- host/discovery.sh@75 -- # notify_id=2 00:29:40.469 05:44:43 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:40.469 05:44:43 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:40.469 05:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.469 05:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 [2024-12-07 05:44:43.534480] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:40.469 [2024-12-07 05:44:43.535389] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:40.469 [2024-12-07 05:44:43.535416] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:40.469 05:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.469 05:44:43 -- host/discovery.sh@117 -- # sleep 1 00:29:40.469 [2024-12-07 05:44:43.661811] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:41.040 [2024-12-07 05:44:43.971377] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:41.040 [2024-12-07 05:44:43.971400] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:41.040 [2024-12-07 05:44:43.971407] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:41.611 05:44:44 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:41.611 05:44:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:41.611 05:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.611 05:44:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:41.611 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.611 05:44:44 -- host/discovery.sh@59 -- # sort 00:29:41.611 05:44:44 -- host/discovery.sh@59 -- # xargs 00:29:41.611 05:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@119 -- # get_bdev_list 00:29:41.611 05:44:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.611 05:44:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:41.611 05:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.611 05:44:44 -- host/discovery.sh@55 -- # sort 00:29:41.611 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.611 05:44:44 -- host/discovery.sh@55 -- # xargs 00:29:41.611 05:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:41.611 05:44:44 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:41.611 05:44:44 -- host/discovery.sh@63 -- # xargs 00:29:41.611 05:44:44 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:29:41.611 05:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.611 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.611 05:44:44 -- host/discovery.sh@63 -- # sort -n 00:29:41.611 05:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@121 -- # get_notification_count 00:29:41.611 05:44:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:41.611 05:44:44 -- host/discovery.sh@74 -- # jq '. | length' 00:29:41.611 05:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.611 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.611 05:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@74 -- # notification_count=0 00:29:41.611 05:44:44 -- host/discovery.sh@75 -- # notify_id=2 00:29:41.611 05:44:44 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:41.611 05:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.611 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.611 [2024-12-07 05:44:44.746326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.611 [2024-12-07 05:44:44.746354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.611 [2024-12-07 05:44:44.746364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.611 [2024-12-07 05:44:44.746372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.611 [2024-12-07 05:44:44.746380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.611 [2024-12-07 05:44:44.746388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.611 [2024-12-07 05:44:44.746396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.611 [2024-12-07 05:44:44.746403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.611 [2024-12-07 05:44:44.746410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.611 [2024-12-07 05:44:44.746514] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:41.611 [2024-12-07 05:44:44.746529] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:41.611 05:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.611 05:44:44 -- host/discovery.sh@127 -- # sleep 1 00:29:41.611 [2024-12-07 05:44:44.756335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.611 [2024-12-07 05:44:44.766377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.611 [2024-12-07 05:44:44.766716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.611 [2024-12-07 05:44:44.766801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.611 [2024-12-07 05:44:44.766811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.611 [2024-12-07 05:44:44.766819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.611 [2024-12-07 05:44:44.766832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.611 [2024-12-07 05:44:44.766851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.611 [2024-12-07 05:44:44.766860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.611 [2024-12-07 05:44:44.766869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.611 [2024-12-07 05:44:44.766885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.611 [2024-12-07 05:44:44.776433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.611 [2024-12-07 05:44:44.776730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.611 [2024-12-07 05:44:44.777038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.611 [2024-12-07 05:44:44.777049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.611 [2024-12-07 05:44:44.777056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.611 [2024-12-07 05:44:44.777068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.611 [2024-12-07 05:44:44.777086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.611 [2024-12-07 05:44:44.777093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.611 [2024-12-07 05:44:44.777100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.777111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
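The errno = 111 (ECONNREFUSED) burst here is expected: host/discovery.sh has just moved the subsystem's data listener from port 4420 to 4421, so the host's reconnect attempts against 4420 keep failing until the next discovery log page drops that path. A minimal sketch of the two RPCs driving this phase, copied from the trace above (rpc_cmd is the test harness's JSON-RPC helper):

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The host keeps retrying 10.0.0.2:4420 (connect() -> ECONNREFUSED) until the discovery
# poller reports "4420 not found" / "4421 found again" further down in the trace.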
00:29:41.612 [2024-12-07 05:44:44.786485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.786562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.786827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.786837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.786845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.786856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.786867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.786873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.786880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.786891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.612 [2024-12-07 05:44:44.796535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.796842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.797174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.797185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.797193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.797205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.797222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.797229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.797236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.797247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.612 [2024-12-07 05:44:44.806589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.806904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.807104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.807114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.807121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.807133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.807143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.807149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.807156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.807167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.612 [2024-12-07 05:44:44.816652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.816862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.817049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.817060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.817067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.817079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.817089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.817095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.817102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.817113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.612 [2024-12-07 05:44:44.826705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.827002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.827303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.827314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.827321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.827332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.827350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.827357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.827364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.827374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.612 [2024-12-07 05:44:44.836757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.837058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.837377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.837387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.837394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.837405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.837422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.837429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.837436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.837447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.612 [2024-12-07 05:44:44.846810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.612 [2024-12-07 05:44:44.847125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.847445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.612 [2024-12-07 05:44:44.847455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.612 [2024-12-07 05:44:44.847462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.612 [2024-12-07 05:44:44.847474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.612 [2024-12-07 05:44:44.847492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.612 [2024-12-07 05:44:44.847499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.612 [2024-12-07 05:44:44.847506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.612 [2024-12-07 05:44:44.847516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.873 [2024-12-07 05:44:44.856861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.873 [2024-12-07 05:44:44.857201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.873 [2024-12-07 05:44:44.857522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.873 [2024-12-07 05:44:44.857533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.873 [2024-12-07 05:44:44.857540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.873 [2024-12-07 05:44:44.857551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.873 [2024-12-07 05:44:44.857568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.873 [2024-12-07 05:44:44.857575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.873 [2024-12-07 05:44:44.857582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.873 [2024-12-07 05:44:44.857593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
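Once the retries settle, the harness re-checks which paths the host controller still has, as seen in the trace that follows. A minimal sketch of that check, with the command and jq filter copied from the surrounding xtrace output (get_subsystem_paths is the helper in host/discovery.sh):

# Before the listener switch this printed "4420 4421"; afterwards, only "4421".
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs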
00:29:41.873 [2024-12-07 05:44:44.866913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:41.873 [2024-12-07 05:44:44.867231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.873 [2024-12-07 05:44:44.867535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.873 [2024-12-07 05:44:44.867545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa990d0 with addr=10.0.0.2, port=4420 00:29:41.873 [2024-12-07 05:44:44.867556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa990d0 is same with the state(5) to be set 00:29:41.873 [2024-12-07 05:44:44.867567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa990d0 (9): Bad file descriptor 00:29:41.873 [2024-12-07 05:44:44.867577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.873 [2024-12-07 05:44:44.867583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:41.873 [2024-12-07 05:44:44.867590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:41.873 [2024-12-07 05:44:44.867600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.873 [2024-12-07 05:44:44.873639] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:41.873 [2024-12-07 05:44:44.873657] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:42.814 05:44:45 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:42.814 05:44:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:42.814 05:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.814 05:44:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:42.814 05:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:42.814 05:44:45 -- host/discovery.sh@59 -- # sort 00:29:42.814 05:44:45 -- host/discovery.sh@59 -- # xargs 00:29:42.814 05:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.814 05:44:45 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@129 -- # get_bdev_list 00:29:42.815 05:44:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.815 05:44:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:42.815 05:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.815 05:44:45 -- host/discovery.sh@55 -- # sort 00:29:42.815 05:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:42.815 05:44:45 -- host/discovery.sh@55 -- # xargs 00:29:42.815 05:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:42.815 05:44:45 -- host/discovery.sh@63 -- # xargs 00:29:42.815 05:44:45 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:42.815 05:44:45 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:42.815 05:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.815 05:44:45 -- 
host/discovery.sh@63 -- # sort -n 00:29:42.815 05:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:42.815 05:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@131 -- # get_notification_count 00:29:42.815 05:44:45 -- host/discovery.sh@74 -- # jq '. | length' 00:29:42.815 05:44:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:42.815 05:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.815 05:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:42.815 05:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@74 -- # notification_count=0 00:29:42.815 05:44:45 -- host/discovery.sh@75 -- # notify_id=2 00:29:42.815 05:44:45 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:42.815 05:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.815 05:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:42.815 05:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.815 05:44:45 -- host/discovery.sh@135 -- # sleep 1 00:29:43.759 05:44:46 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:43.759 05:44:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:43.759 05:44:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:43.759 05:44:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.760 05:44:46 -- common/autotest_common.sh@10 -- # set +x 00:29:43.760 05:44:46 -- host/discovery.sh@59 -- # sort 00:29:43.760 05:44:46 -- host/discovery.sh@59 -- # xargs 00:29:43.760 05:44:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.021 05:44:47 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:44.021 05:44:47 -- host/discovery.sh@137 -- # get_bdev_list 00:29:44.021 05:44:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.021 05:44:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.021 05:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.021 05:44:47 -- host/discovery.sh@55 -- # sort 00:29:44.021 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:29:44.021 05:44:47 -- host/discovery.sh@55 -- # xargs 00:29:44.021 05:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.021 05:44:47 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:44.021 05:44:47 -- host/discovery.sh@138 -- # get_notification_count 00:29:44.021 05:44:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:44.021 05:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.021 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:29:44.021 05:44:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:44.021 05:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.021 05:44:47 -- host/discovery.sh@74 -- # notification_count=2 00:29:44.021 05:44:47 -- host/discovery.sh@75 -- # notify_id=4 00:29:44.021 05:44:47 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:44.021 05:44:47 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.021 05:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.021 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:29:44.964 [2024-12-07 05:44:48.158889] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:44.964 [2024-12-07 05:44:48.158909] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:44.964 [2024-12-07 05:44:48.158921] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:45.224 [2024-12-07 05:44:48.288332] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:45.485 [2024-12-07 05:44:48.556664] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:45.485 [2024-12-07 05:44:48.556694] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:45.485 05:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.485 05:44:48 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@650 -- # local es=0 00:29:45.485 05:44:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.485 05:44:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.485 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.485 request: 00:29:45.485 { 00:29:45.485 "name": "nvme", 00:29:45.485 "trtype": "tcp", 00:29:45.485 "traddr": "10.0.0.2", 00:29:45.485 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:45.485 "adrfam": "ipv4", 00:29:45.485 "trsvcid": "8009", 00:29:45.485 "wait_for_attach": true, 00:29:45.485 "method": "bdev_nvme_start_discovery", 00:29:45.485 "req_id": 1 00:29:45.485 } 00:29:45.485 Got JSON-RPC error response 00:29:45.485 response: 00:29:45.485 { 00:29:45.485 "code": -17, 00:29:45.485 "message": "File exists" 00:29:45.485 } 00:29:45.485 05:44:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:45.485 05:44:48 -- common/autotest_common.sh@653 -- # es=1 00:29:45.485 05:44:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:45.485 05:44:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:45.485 05:44:48 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:45.485 05:44:48 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:45.485 05:44:48 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:45.485 05:44:48 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:45.485 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.485 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.485 05:44:48 -- host/discovery.sh@67 -- # sort 00:29:45.485 05:44:48 -- host/discovery.sh@67 -- # xargs 00:29:45.485 05:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.485 05:44:48 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:45.485 05:44:48 -- host/discovery.sh@147 -- # get_bdev_list 00:29:45.485 05:44:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.485 05:44:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.485 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.485 05:44:48 -- host/discovery.sh@55 -- # sort 00:29:45.485 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.485 05:44:48 -- host/discovery.sh@55 -- # xargs 00:29:45.485 05:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.485 05:44:48 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:45.485 05:44:48 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@650 -- # local es=0 00:29:45.485 05:44:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:45.485 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.485 05:44:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:45.485 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.485 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.485 request: 00:29:45.485 { 00:29:45.485 "name": "nvme_second", 00:29:45.485 "trtype": "tcp", 00:29:45.485 "traddr": "10.0.0.2", 00:29:45.485 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:45.486 "adrfam": "ipv4", 00:29:45.486 "trsvcid": "8009", 00:29:45.486 "wait_for_attach": true, 00:29:45.486 "method": "bdev_nvme_start_discovery", 00:29:45.486 "req_id": 1 00:29:45.486 } 00:29:45.486 Got JSON-RPC error response 00:29:45.486 response: 00:29:45.486 { 00:29:45.486 "code": -17, 00:29:45.486 "message": "File exists" 00:29:45.486 } 00:29:45.486 05:44:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:45.486 05:44:48 -- common/autotest_common.sh@653 -- # es=1 00:29:45.486 05:44:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:45.486 05:44:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:45.486 05:44:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:45.486 05:44:48 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:45.486 05:44:48 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:29:45.486 05:44:48 -- host/discovery.sh@67 -- # xargs 00:29:45.486 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.486 05:44:48 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:45.486 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.486 05:44:48 -- host/discovery.sh@67 -- # sort 00:29:45.486 05:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.747 05:44:48 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:45.747 05:44:48 -- host/discovery.sh@153 -- # get_bdev_list 00:29:45.747 05:44:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.747 05:44:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.747 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.747 05:44:48 -- host/discovery.sh@55 -- # sort 00:29:45.747 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:45.747 05:44:48 -- host/discovery.sh@55 -- # xargs 00:29:45.747 05:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.747 05:44:48 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:45.747 05:44:48 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:45.747 05:44:48 -- common/autotest_common.sh@650 -- # local es=0 00:29:45.747 05:44:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:45.747 05:44:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:45.747 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.747 05:44:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:45.747 05:44:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:45.747 05:44:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:45.747 05:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.747 05:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:46.690 [2024-12-07 05:44:49.812158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.690 [2024-12-07 05:44:49.812381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.690 [2024-12-07 05:44:49.812394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab62a0 with addr=10.0.0.2, port=8010 00:29:46.690 [2024-12-07 05:44:49.812406] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:46.690 [2024-12-07 05:44:49.812412] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:46.690 [2024-12-07 05:44:49.812420] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:47.631 [2024-12-07 05:44:50.814306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.631 [2024-12-07 05:44:50.814617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.631 [2024-12-07 05:44:50.814628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac8130 with addr=10.0.0.2, port=8010 00:29:47.631 [2024-12-07 05:44:50.814639] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:29:47.631 [2024-12-07 05:44:50.814646] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:47.631 [2024-12-07 05:44:50.814653] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:49.017 [2024-12-07 05:44:51.816486] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:49.017 request: 00:29:49.017 { 00:29:49.017 "name": "nvme_second", 00:29:49.017 "trtype": "tcp", 00:29:49.017 "traddr": "10.0.0.2", 00:29:49.017 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:49.017 "adrfam": "ipv4", 00:29:49.017 "trsvcid": "8010", 00:29:49.017 "attach_timeout_ms": 3000, 00:29:49.017 "method": "bdev_nvme_start_discovery", 00:29:49.017 "req_id": 1 00:29:49.017 } 00:29:49.017 Got JSON-RPC error response 00:29:49.017 response: 00:29:49.017 { 00:29:49.017 "code": -110, 00:29:49.017 "message": "Connection timed out" 00:29:49.017 } 00:29:49.017 05:44:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:49.017 05:44:51 -- common/autotest_common.sh@653 -- # es=1 00:29:49.017 05:44:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:49.017 05:44:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:49.017 05:44:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:49.017 05:44:51 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:49.017 05:44:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:49.017 05:44:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:49.017 05:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.017 05:44:51 -- host/discovery.sh@67 -- # sort 00:29:49.017 05:44:51 -- common/autotest_common.sh@10 -- # set +x 00:29:49.017 05:44:51 -- host/discovery.sh@67 -- # xargs 00:29:49.017 05:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.017 05:44:51 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:49.017 05:44:51 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:49.017 05:44:51 -- host/discovery.sh@162 -- # kill 2002023 00:29:49.017 05:44:51 -- host/discovery.sh@163 -- # nvmftestfini 00:29:49.017 05:44:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:49.017 05:44:51 -- nvmf/common.sh@116 -- # sync 00:29:49.017 05:44:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:49.017 05:44:51 -- nvmf/common.sh@119 -- # set +e 00:29:49.017 05:44:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:49.017 05:44:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:49.017 rmmod nvme_tcp 00:29:49.017 rmmod nvme_fabrics 00:29:49.017 rmmod nvme_keyring 00:29:49.017 05:44:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:49.017 05:44:51 -- nvmf/common.sh@123 -- # set -e 00:29:49.017 05:44:51 -- nvmf/common.sh@124 -- # return 0 00:29:49.017 05:44:51 -- nvmf/common.sh@477 -- # '[' -n 2001747 ']' 00:29:49.017 05:44:51 -- nvmf/common.sh@478 -- # killprocess 2001747 00:29:49.017 05:44:51 -- common/autotest_common.sh@936 -- # '[' -z 2001747 ']' 00:29:49.017 05:44:51 -- common/autotest_common.sh@940 -- # kill -0 2001747 00:29:49.017 05:44:51 -- common/autotest_common.sh@941 -- # uname 00:29:49.017 05:44:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:49.017 05:44:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2001747 00:29:49.017 05:44:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:49.017 05:44:52 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:29:49.017 05:44:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2001747' 00:29:49.017 killing process with pid 2001747 00:29:49.017 05:44:52 -- common/autotest_common.sh@955 -- # kill 2001747 00:29:49.017 05:44:52 -- common/autotest_common.sh@960 -- # wait 2001747 00:29:49.017 05:44:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:49.017 05:44:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:49.017 05:44:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:49.017 05:44:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.017 05:44:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:49.017 05:44:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.017 05:44:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.017 05:44:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.565 05:44:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:51.565 00:29:51.565 real 0m22.841s 00:29:51.565 user 0m28.484s 00:29:51.565 sys 0m7.051s 00:29:51.565 05:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:51.565 05:44:54 -- common/autotest_common.sh@10 -- # set +x 00:29:51.565 ************************************ 00:29:51.565 END TEST nvmf_discovery 00:29:51.565 ************************************ 00:29:51.565 05:44:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:51.565 05:44:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:51.565 05:44:54 -- common/autotest_common.sh@10 -- # set +x 00:29:51.565 ************************************ 00:29:51.565 START TEST nvmf_discovery_remove_ifc 00:29:51.565 ************************************ 00:29:51.565 05:44:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:51.565 * Looking for test storage... 00:29:51.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.565 05:44:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:51.565 05:44:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:51.565 05:44:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:51.565 05:44:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:51.565 05:44:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:51.565 05:44:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:51.565 05:44:54 -- scripts/common.sh@335 -- # IFS=.-: 00:29:51.565 05:44:54 -- scripts/common.sh@335 -- # read -ra ver1 00:29:51.565 05:44:54 -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.565 05:44:54 -- scripts/common.sh@336 -- # read -ra ver2 00:29:51.565 05:44:54 -- scripts/common.sh@337 -- # local 'op=<' 00:29:51.565 05:44:54 -- scripts/common.sh@339 -- # ver1_l=2 00:29:51.565 05:44:54 -- scripts/common.sh@340 -- # ver2_l=1 00:29:51.565 05:44:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:51.565 05:44:54 -- scripts/common.sh@343 -- # case "$op" in 00:29:51.565 05:44:54 -- scripts/common.sh@344 -- # : 1 00:29:51.565 05:44:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:51.565 05:44:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.565 05:44:54 -- scripts/common.sh@364 -- # decimal 1 00:29:51.565 05:44:54 -- scripts/common.sh@352 -- # local d=1 00:29:51.565 05:44:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.565 05:44:54 -- scripts/common.sh@354 -- # echo 1 00:29:51.565 05:44:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:51.565 05:44:54 -- scripts/common.sh@365 -- # decimal 2 00:29:51.565 05:44:54 -- scripts/common.sh@352 -- # local d=2 00:29:51.565 05:44:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.565 05:44:54 -- scripts/common.sh@354 -- # echo 2 00:29:51.565 05:44:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:51.565 05:44:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:51.565 05:44:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:51.565 05:44:54 -- scripts/common.sh@367 -- # return 0 00:29:51.565 05:44:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:51.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.565 --rc genhtml_branch_coverage=1 00:29:51.565 --rc genhtml_function_coverage=1 00:29:51.565 --rc genhtml_legend=1 00:29:51.565 --rc geninfo_all_blocks=1 00:29:51.565 --rc geninfo_unexecuted_blocks=1 00:29:51.565 00:29:51.565 ' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:51.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.565 --rc genhtml_branch_coverage=1 00:29:51.565 --rc genhtml_function_coverage=1 00:29:51.565 --rc genhtml_legend=1 00:29:51.565 --rc geninfo_all_blocks=1 00:29:51.565 --rc geninfo_unexecuted_blocks=1 00:29:51.565 00:29:51.565 ' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:51.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.565 --rc genhtml_branch_coverage=1 00:29:51.565 --rc genhtml_function_coverage=1 00:29:51.565 --rc genhtml_legend=1 00:29:51.565 --rc geninfo_all_blocks=1 00:29:51.565 --rc geninfo_unexecuted_blocks=1 00:29:51.565 00:29:51.565 ' 00:29:51.565 05:44:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:51.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.565 --rc genhtml_branch_coverage=1 00:29:51.565 --rc genhtml_function_coverage=1 00:29:51.565 --rc genhtml_legend=1 00:29:51.565 --rc geninfo_all_blocks=1 00:29:51.565 --rc geninfo_unexecuted_blocks=1 00:29:51.565 00:29:51.565 ' 00:29:51.565 05:44:54 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.565 05:44:54 -- nvmf/common.sh@7 -- # uname -s 00:29:51.565 05:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.565 05:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.565 05:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.565 05:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.565 05:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.565 05:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.565 05:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.565 05:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.565 05:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.565 05:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.565 05:44:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:51.565 05:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:51.565 05:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.565 05:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.565 05:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.565 05:44:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.565 05:44:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.565 05:44:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.565 05:44:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.565 05:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.565 05:44:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.566 05:44:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.566 05:44:54 -- paths/export.sh@5 -- # export PATH 00:29:51.566 05:44:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.566 05:44:54 -- nvmf/common.sh@46 -- # : 0 00:29:51.566 05:44:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:51.566 05:44:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:51.566 05:44:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:51.566 05:44:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.566 05:44:54 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.566 05:44:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:51.566 05:44:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:51.566 05:44:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:51.566 05:44:54 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:51.566 05:44:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:51.566 05:44:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.566 05:44:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:51.566 05:44:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:51.566 05:44:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:51.566 05:44:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.566 05:44:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.566 05:44:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.566 05:44:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:51.566 05:44:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:51.566 05:44:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:51.566 05:44:54 -- common/autotest_common.sh@10 -- # set +x 00:29:59.707 05:45:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:59.707 05:45:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:59.707 05:45:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:59.707 05:45:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:59.707 05:45:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:59.707 05:45:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:59.707 05:45:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:59.707 05:45:01 -- nvmf/common.sh@294 -- # net_devs=() 00:29:59.707 05:45:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:59.707 05:45:01 -- nvmf/common.sh@295 -- # e810=() 00:29:59.707 05:45:01 -- nvmf/common.sh@295 -- # local -ga e810 00:29:59.707 05:45:01 -- nvmf/common.sh@296 -- # x722=() 00:29:59.707 05:45:01 -- nvmf/common.sh@296 -- # local -ga x722 00:29:59.707 05:45:01 -- nvmf/common.sh@297 -- # mlx=() 00:29:59.707 05:45:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:59.707 05:45:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.707 05:45:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:59.707 05:45:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:59.707 05:45:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.707 05:45:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:59.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:59.707 05:45:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.707 05:45:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:59.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:59.707 05:45:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.707 05:45:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.707 05:45:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.707 05:45:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:59.707 Found net devices under 0000:31:00.0: cvl_0_0 00:29:59.707 05:45:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.707 05:45:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.707 05:45:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.707 05:45:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.707 05:45:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:59.707 Found net devices under 0000:31:00.1: cvl_0_1 00:29:59.707 05:45:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.707 05:45:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:59.707 05:45:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:59.707 05:45:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:59.707 05:45:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.707 
05:45:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.707 05:45:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.707 05:45:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:59.707 05:45:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.707 05:45:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.707 05:45:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:59.707 05:45:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.707 05:45:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.707 05:45:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:59.707 05:45:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:59.707 05:45:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.707 05:45:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.707 05:45:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.707 05:45:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.707 05:45:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:59.707 05:45:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.707 05:45:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.707 05:45:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.707 05:45:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:59.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:29:59.707 00:29:59.707 --- 10.0.0.2 ping statistics --- 00:29:59.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.707 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:29:59.707 05:45:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:29:59.707 00:29:59.707 --- 10.0.0.1 ping statistics --- 00:29:59.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.707 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:29:59.707 05:45:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.707 05:45:02 -- nvmf/common.sh@410 -- # return 0 00:29:59.707 05:45:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:59.707 05:45:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.707 05:45:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:59.707 05:45:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:59.707 05:45:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.707 05:45:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:59.707 05:45:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:59.707 05:45:02 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:59.707 05:45:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:59.707 05:45:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:59.707 05:45:02 -- common/autotest_common.sh@10 -- # set +x 00:29:59.707 05:45:02 -- nvmf/common.sh@469 -- # nvmfpid=2008772 00:29:59.707 05:45:02 -- nvmf/common.sh@470 -- # waitforlisten 2008772 00:29:59.707 05:45:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:59.707 05:45:02 -- common/autotest_common.sh@829 -- # '[' -z 2008772 ']' 00:29:59.707 05:45:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.707 05:45:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.707 05:45:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.707 05:45:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.707 05:45:02 -- common/autotest_common.sh@10 -- # set +x 00:29:59.707 [2024-12-07 05:45:02.111020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:59.707 [2024-12-07 05:45:02.111088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.707 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.707 [2024-12-07 05:45:02.201180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.707 [2024-12-07 05:45:02.290707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:59.707 [2024-12-07 05:45:02.290851] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.707 [2024-12-07 05:45:02.290861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.707 [2024-12-07 05:45:02.290869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:59.707 [2024-12-07 05:45:02.290892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.707 05:45:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:59.707 05:45:02 -- common/autotest_common.sh@862 -- # return 0 00:29:59.707 05:45:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:59.707 05:45:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:59.707 05:45:02 -- common/autotest_common.sh@10 -- # set +x 00:29:59.968 05:45:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.968 05:45:02 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:59.968 05:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.968 05:45:02 -- common/autotest_common.sh@10 -- # set +x 00:29:59.968 [2024-12-07 05:45:02.967329] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.968 [2024-12-07 05:45:02.975562] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:59.968 null0 00:29:59.968 [2024-12-07 05:45:03.007560] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.968 05:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.968 05:45:03 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2009097 00:29:59.968 05:45:03 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2009097 /tmp/host.sock 00:29:59.968 05:45:03 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:59.968 05:45:03 -- common/autotest_common.sh@829 -- # '[' -z 2009097 ']' 00:29:59.968 05:45:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:59.968 05:45:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.968 05:45:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:59.968 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:59.968 05:45:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.968 05:45:03 -- common/autotest_common.sh@10 -- # set +x 00:29:59.968 [2024-12-07 05:45:03.077296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:59.968 [2024-12-07 05:45:03.077357] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009097 ] 00:29:59.968 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.968 [2024-12-07 05:45:03.143423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.228 [2024-12-07 05:45:03.215821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:00.228 [2024-12-07 05:45:03.215962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.797 05:45:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.797 05:45:03 -- common/autotest_common.sh@862 -- # return 0 00:30:00.797 05:45:03 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.797 05:45:03 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:00.797 05:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.797 05:45:03 -- common/autotest_common.sh@10 -- # set +x 00:30:00.797 05:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.797 05:45:03 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:00.797 05:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.797 05:45:03 -- common/autotest_common.sh@10 -- # set +x 00:30:00.797 05:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.797 05:45:03 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:00.797 05:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.797 05:45:03 -- common/autotest_common.sh@10 -- # set +x 00:30:01.751 [2024-12-07 05:45:04.955403] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:01.751 [2024-12-07 05:45:04.955428] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:01.751 [2024-12-07 05:45:04.955441] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:02.010 [2024-12-07 05:45:05.082855] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:02.270 [2024-12-07 05:45:05.268625] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:02.270 [2024-12-07 05:45:05.268667] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:02.270 [2024-12-07 05:45:05.268691] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:02.270 [2024-12-07 05:45:05.268704] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:02.270 [2024-12-07 05:45:05.268724] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:02.270 05:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.270 [2024-12-07 05:45:05.275322] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bb75b0 was disconnected and freed. delete nvme_qpair. 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.270 05:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.270 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.270 05:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.270 05:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.270 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:30:02.270 05:45:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.270 05:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.529 05:45:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:02.529 05:45:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.478 05:45:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.478 05:45:06 -- common/autotest_common.sh@10 -- # set +x 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.478 05:45:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:03.478 05:45:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.415 05:45:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.415 05:45:07 -- common/autotest_common.sh@10 -- # set +x 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.415 05:45:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:04.415 05:45:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.797 05:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.797 05:45:08 -- common/autotest_common.sh@10 -- # set +x 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.797 05:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:05.797 05:45:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.741 05:45:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.741 05:45:09 -- common/autotest_common.sh@10 -- # set +x 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.741 05:45:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:06.741 05:45:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:07.683 [2024-12-07 05:45:10.709245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:07.683 [2024-12-07 05:45:10.709294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.683 [2024-12-07 05:45:10.709305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.683 [2024-12-07 05:45:10.709315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.683 [2024-12-07 05:45:10.709323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.683 [2024-12-07 05:45:10.709332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.683 [2024-12-07 05:45:10.709339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.683 [2024-12-07 05:45:10.709347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.683 [2024-12-07 05:45:10.709355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.683 [2024-12-07 05:45:10.709363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.683 [2024-12-07 05:45:10.709371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.683 [2024-12-07 05:45:10.709378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7dda0 is same with the state(5) to be set 00:30:07.683 [2024-12-07 05:45:10.719259] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7dda0 (9): Bad file descriptor 00:30:07.683 [2024-12-07 05:45:10.729300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:07.683 05:45:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:07.683 05:45:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.683 05:45:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:07.683 05:45:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.683 05:45:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:07.683 05:45:10 -- common/autotest_common.sh@10 -- # set +x 00:30:07.683 05:45:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:08.625 [2024-12-07 05:45:11.752054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:09.646 [2024-12-07 05:45:12.776076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:09.646 [2024-12-07 05:45:12.776119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7dda0 with addr=10.0.0.2, port=4420 00:30:09.646 [2024-12-07 05:45:12.776138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7dda0 is same with the state(5) to be set 00:30:09.646 [2024-12-07 05:45:12.776165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.646 [2024-12-07 05:45:12.776174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.646 [2024-12-07 05:45:12.776181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.646 [2024-12-07 05:45:12.776189] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:09.646 [2024-12-07 05:45:12.776538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7dda0 (9): Bad file descriptor 00:30:09.646 [2024-12-07 05:45:12.776561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.646 [2024-12-07 05:45:12.776582] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:09.646 [2024-12-07 05:45:12.776604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.646 [2024-12-07 05:45:12.776614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.646 [2024-12-07 05:45:12.776624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.646 [2024-12-07 05:45:12.776632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.646 [2024-12-07 05:45:12.776640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.646 [2024-12-07 05:45:12.776647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.646 [2024-12-07 05:45:12.776656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.646 [2024-12-07 05:45:12.776663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.646 [2024-12-07 05:45:12.776672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.646 [2024-12-07 05:45:12.776679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.646 [2024-12-07 05:45:12.776686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:30:09.646 [2024-12-07 05:45:12.777165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e1b0 (9): Bad file descriptor 00:30:09.646 [2024-12-07 05:45:12.778180] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:09.646 [2024-12-07 05:45:12.778191] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:09.646 05:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.646 05:45:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:09.646 05:45:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:10.583 05:45:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:10.583 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.583 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:10.583 05:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.583 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:10.583 05:45:13 -- common/autotest_common.sh@10 -- # set +x 00:30:10.583 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.583 05:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:10.842 05:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.842 05:45:13 -- common/autotest_common.sh@10 -- # set +x 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.842 05:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:10.842 05:45:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:11.779 [2024-12-07 05:45:14.791033] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:11.779 [2024-12-07 05:45:14.791054] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:11.779 [2024-12-07 05:45:14.791067] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:11.779 [2024-12-07 05:45:14.918475] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:11.779 [2024-12-07 05:45:14.980069] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:11.779 [2024-12-07 05:45:14.980105] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:11.779 [2024-12-07 05:45:14.980124] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:11.779 [2024-12-07 05:45:14.980139] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach 
nvme1 done 00:30:11.779 [2024-12-07 05:45:14.980147] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:11.779 [2024-12-07 05:45:14.989120] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b8c9d0 was disconnected and freed. delete nvme_qpair. 00:30:11.779 05:45:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:11.779 05:45:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.779 05:45:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:11.779 05:45:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.779 05:45:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:11.779 05:45:15 -- common/autotest_common.sh@10 -- # set +x 00:30:11.780 05:45:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:12.038 05:45:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.038 05:45:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:12.038 05:45:15 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:12.038 05:45:15 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2009097 00:30:12.038 05:45:15 -- common/autotest_common.sh@936 -- # '[' -z 2009097 ']' 00:30:12.038 05:45:15 -- common/autotest_common.sh@940 -- # kill -0 2009097 00:30:12.038 05:45:15 -- common/autotest_common.sh@941 -- # uname 00:30:12.038 05:45:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:12.038 05:45:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2009097 00:30:12.038 05:45:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:12.038 05:45:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:12.038 05:45:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2009097' 00:30:12.038 killing process with pid 2009097 00:30:12.038 05:45:15 -- common/autotest_common.sh@955 -- # kill 2009097 00:30:12.038 05:45:15 -- common/autotest_common.sh@960 -- # wait 2009097 00:30:12.038 05:45:15 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:12.038 05:45:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:12.038 05:45:15 -- nvmf/common.sh@116 -- # sync 00:30:12.038 05:45:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:12.038 05:45:15 -- nvmf/common.sh@119 -- # set +e 00:30:12.038 05:45:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:12.038 05:45:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:12.038 rmmod nvme_tcp 00:30:12.038 rmmod nvme_fabrics 00:30:12.298 rmmod nvme_keyring 00:30:12.298 05:45:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:12.298 05:45:15 -- nvmf/common.sh@123 -- # set -e 00:30:12.298 05:45:15 -- nvmf/common.sh@124 -- # return 0 00:30:12.298 05:45:15 -- nvmf/common.sh@477 -- # '[' -n 2008772 ']' 00:30:12.298 05:45:15 -- nvmf/common.sh@478 -- # killprocess 2008772 00:30:12.298 05:45:15 -- common/autotest_common.sh@936 -- # '[' -z 2008772 ']' 00:30:12.298 05:45:15 -- common/autotest_common.sh@940 -- # kill -0 2008772 00:30:12.298 05:45:15 -- common/autotest_common.sh@941 -- # uname 00:30:12.298 05:45:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:12.298 05:45:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2008772 00:30:12.298 05:45:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:12.298 05:45:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:12.298 05:45:15 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2008772' 00:30:12.298 killing process with pid 2008772 00:30:12.298 05:45:15 -- common/autotest_common.sh@955 -- # kill 2008772 00:30:12.298 05:45:15 -- common/autotest_common.sh@960 -- # wait 2008772 00:30:12.298 05:45:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:12.298 05:45:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:12.298 05:45:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:12.298 05:45:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.298 05:45:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:12.298 05:45:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.298 05:45:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.298 05:45:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.853 05:45:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:14.853 00:30:14.853 real 0m23.298s 00:30:14.853 user 0m26.076s 00:30:14.853 sys 0m7.014s 00:30:14.853 05:45:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:14.853 05:45:17 -- common/autotest_common.sh@10 -- # set +x 00:30:14.853 ************************************ 00:30:14.853 END TEST nvmf_discovery_remove_ifc 00:30:14.853 ************************************ 00:30:14.853 05:45:17 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:30:14.853 05:45:17 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:14.853 05:45:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:14.853 05:45:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:14.853 05:45:17 -- common/autotest_common.sh@10 -- # set +x 00:30:14.853 ************************************ 00:30:14.853 START TEST nvmf_digest 00:30:14.853 ************************************ 00:30:14.853 05:45:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:14.853 * Looking for test storage... 00:30:14.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.853 05:45:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:14.853 05:45:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:14.853 05:45:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:14.853 05:45:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:14.853 05:45:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:14.853 05:45:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:14.853 05:45:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:14.853 05:45:17 -- scripts/common.sh@335 -- # IFS=.-: 00:30:14.853 05:45:17 -- scripts/common.sh@335 -- # read -ra ver1 00:30:14.853 05:45:17 -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.853 05:45:17 -- scripts/common.sh@336 -- # read -ra ver2 00:30:14.853 05:45:17 -- scripts/common.sh@337 -- # local 'op=<' 00:30:14.853 05:45:17 -- scripts/common.sh@339 -- # ver1_l=2 00:30:14.853 05:45:17 -- scripts/common.sh@340 -- # ver2_l=1 00:30:14.853 05:45:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:14.853 05:45:17 -- scripts/common.sh@343 -- # case "$op" in 00:30:14.853 05:45:17 -- scripts/common.sh@344 -- # : 1 00:30:14.853 05:45:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:14.853 05:45:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.853 05:45:17 -- scripts/common.sh@364 -- # decimal 1 00:30:14.853 05:45:17 -- scripts/common.sh@352 -- # local d=1 00:30:14.853 05:45:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.853 05:45:17 -- scripts/common.sh@354 -- # echo 1 00:30:14.853 05:45:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:14.853 05:45:17 -- scripts/common.sh@365 -- # decimal 2 00:30:14.853 05:45:17 -- scripts/common.sh@352 -- # local d=2 00:30:14.853 05:45:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.853 05:45:17 -- scripts/common.sh@354 -- # echo 2 00:30:14.853 05:45:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:14.853 05:45:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:14.853 05:45:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:14.853 05:45:17 -- scripts/common.sh@367 -- # return 0 00:30:14.853 05:45:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.853 05:45:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.853 --rc genhtml_branch_coverage=1 00:30:14.853 --rc genhtml_function_coverage=1 00:30:14.853 --rc genhtml_legend=1 00:30:14.853 --rc geninfo_all_blocks=1 00:30:14.853 --rc geninfo_unexecuted_blocks=1 00:30:14.853 00:30:14.853 ' 00:30:14.853 05:45:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.853 --rc genhtml_branch_coverage=1 00:30:14.853 --rc genhtml_function_coverage=1 00:30:14.853 --rc genhtml_legend=1 00:30:14.853 --rc geninfo_all_blocks=1 00:30:14.853 --rc geninfo_unexecuted_blocks=1 00:30:14.853 00:30:14.853 ' 00:30:14.853 05:45:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:14.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.854 --rc genhtml_branch_coverage=1 00:30:14.854 --rc genhtml_function_coverage=1 00:30:14.854 --rc genhtml_legend=1 00:30:14.854 --rc geninfo_all_blocks=1 00:30:14.854 --rc geninfo_unexecuted_blocks=1 00:30:14.854 00:30:14.854 ' 00:30:14.854 05:45:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:14.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.854 --rc genhtml_branch_coverage=1 00:30:14.854 --rc genhtml_function_coverage=1 00:30:14.854 --rc genhtml_legend=1 00:30:14.854 --rc geninfo_all_blocks=1 00:30:14.854 --rc geninfo_unexecuted_blocks=1 00:30:14.854 00:30:14.854 ' 00:30:14.854 05:45:17 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.854 05:45:17 -- nvmf/common.sh@7 -- # uname -s 00:30:14.854 05:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.854 05:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.854 05:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.854 05:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.854 05:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.854 05:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.854 05:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.854 05:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.854 05:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.854 05:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.854 05:45:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:14.854 05:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:14.854 05:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.854 05:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.854 05:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.854 05:45:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.854 05:45:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.854 05:45:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.854 05:45:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.854 05:45:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 05:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 05:45:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 05:45:17 -- paths/export.sh@5 -- # export PATH 00:30:14.854 05:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 05:45:17 -- nvmf/common.sh@46 -- # : 0 00:30:14.854 05:45:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:14.854 05:45:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:14.854 05:45:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:14.854 05:45:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.854 05:45:17 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.854 05:45:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:14.854 05:45:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:14.854 05:45:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:14.854 05:45:17 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:14.854 05:45:17 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:14.854 05:45:17 -- host/digest.sh@16 -- # runtime=2 00:30:14.854 05:45:17 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:14.854 05:45:17 -- host/digest.sh@132 -- # nvmftestinit 00:30:14.854 05:45:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:14.854 05:45:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.854 05:45:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:14.854 05:45:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:14.854 05:45:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:14.854 05:45:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.854 05:45:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.854 05:45:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.854 05:45:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:14.854 05:45:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:14.854 05:45:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:14.854 05:45:17 -- common/autotest_common.sh@10 -- # set +x 00:30:22.990 05:45:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:22.990 05:45:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:22.990 05:45:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:22.990 05:45:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:22.990 05:45:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:22.990 05:45:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:22.990 05:45:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:22.990 05:45:25 -- nvmf/common.sh@294 -- # net_devs=() 00:30:22.990 05:45:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:22.990 05:45:25 -- nvmf/common.sh@295 -- # e810=() 00:30:22.990 05:45:25 -- nvmf/common.sh@295 -- # local -ga e810 00:30:22.990 05:45:25 -- nvmf/common.sh@296 -- # x722=() 00:30:22.990 05:45:25 -- nvmf/common.sh@296 -- # local -ga x722 00:30:22.990 05:45:25 -- nvmf/common.sh@297 -- # mlx=() 00:30:22.990 05:45:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:22.990 05:45:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.990 05:45:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:22.990 
05:45:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:22.990 05:45:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:22.990 05:45:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:22.990 05:45:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:22.990 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:22.990 05:45:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:22.990 05:45:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:22.990 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:22.990 05:45:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:22.990 05:45:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:22.990 05:45:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:22.990 05:45:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.990 05:45:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:22.990 05:45:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.990 05:45:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:22.990 Found net devices under 0000:31:00.0: cvl_0_0 00:30:22.990 05:45:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.990 05:45:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:22.990 05:45:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.990 05:45:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:22.990 05:45:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.990 05:45:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:22.990 Found net devices under 0000:31:00.1: cvl_0_1 00:30:22.990 05:45:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.990 05:45:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:22.990 05:45:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:22.991 05:45:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:22.991 05:45:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:22.991 05:45:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:22.991 05:45:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.991 05:45:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.991 05:45:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.991 05:45:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:22.991 05:45:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.991 05:45:25 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.991 05:45:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:22.991 05:45:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.991 05:45:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.991 05:45:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:22.991 05:45:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:22.991 05:45:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.991 05:45:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.991 05:45:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.991 05:45:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.991 05:45:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:22.991 05:45:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.991 05:45:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.991 05:45:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.991 05:45:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:22.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:30:22.991 00:30:22.991 --- 10.0.0.2 ping statistics --- 00:30:22.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.991 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:22.991 05:45:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:30:22.991 00:30:22.991 --- 10.0.0.1 ping statistics --- 00:30:22.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.991 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:22.991 05:45:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.991 05:45:25 -- nvmf/common.sh@410 -- # return 0 00:30:22.991 05:45:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:22.991 05:45:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.991 05:45:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:22.991 05:45:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:22.991 05:45:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.991 05:45:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:22.991 05:45:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:22.991 05:45:25 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:22.991 05:45:25 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:22.991 05:45:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:22.991 05:45:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:22.991 05:45:25 -- common/autotest_common.sh@10 -- # set +x 00:30:22.991 ************************************ 00:30:22.991 START TEST nvmf_digest_clean 00:30:22.991 ************************************ 00:30:22.991 05:45:25 -- common/autotest_common.sh@1114 -- # run_digest 00:30:22.991 05:45:25 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:22.991 05:45:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:22.991 05:45:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:22.991 05:45:25 
-- common/autotest_common.sh@10 -- # set +x 00:30:22.991 05:45:25 -- nvmf/common.sh@469 -- # nvmfpid=2016193 00:30:22.991 05:45:25 -- nvmf/common.sh@470 -- # waitforlisten 2016193 00:30:22.991 05:45:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:22.991 05:45:25 -- common/autotest_common.sh@829 -- # '[' -z 2016193 ']' 00:30:22.991 05:45:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.991 05:45:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:22.991 05:45:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.991 05:45:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:22.991 05:45:25 -- common/autotest_common.sh@10 -- # set +x 00:30:22.991 [2024-12-07 05:45:25.451974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:22.991 [2024-12-07 05:45:25.452051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.991 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.991 [2024-12-07 05:45:25.525645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.991 [2024-12-07 05:45:25.598375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:22.991 [2024-12-07 05:45:25.598493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.991 [2024-12-07 05:45:25.598501] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.991 [2024-12-07 05:45:25.598513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
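The namespace plumbing logged above boils down to a short iproute2 sequence. A minimal sketch, assuming the same interface names this job detects (cvl_0_0 as the target-side E810 port, cvl_0_1 as the initiator side) and the same 10.0.0.x addresses; the target application is then launched inside the namespace so initiator and target traffic cross the physical link:

    # move the target port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc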
00:30:22.991 [2024-12-07 05:45:25.598540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.251 05:45:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:23.251 05:45:26 -- common/autotest_common.sh@862 -- # return 0 00:30:23.251 05:45:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:23.251 05:45:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:23.251 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:30:23.251 05:45:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.251 05:45:26 -- host/digest.sh@120 -- # common_target_config 00:30:23.251 05:45:26 -- host/digest.sh@43 -- # rpc_cmd 00:30:23.251 05:45:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.251 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:30:23.251 null0 00:30:23.251 [2024-12-07 05:45:26.341700] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.251 [2024-12-07 05:45:26.365921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.251 05:45:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.251 05:45:26 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:23.251 05:45:26 -- host/digest.sh@77 -- # local rw bs qd 00:30:23.251 05:45:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:23.251 05:45:26 -- host/digest.sh@80 -- # rw=randread 00:30:23.251 05:45:26 -- host/digest.sh@80 -- # bs=4096 00:30:23.252 05:45:26 -- host/digest.sh@80 -- # qd=128 00:30:23.252 05:45:26 -- host/digest.sh@82 -- # bperfpid=2016413 00:30:23.252 05:45:26 -- host/digest.sh@83 -- # waitforlisten 2016413 /var/tmp/bperf.sock 00:30:23.252 05:45:26 -- common/autotest_common.sh@829 -- # '[' -z 2016413 ']' 00:30:23.252 05:45:26 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:23.252 05:45:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:23.252 05:45:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.252 05:45:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:23.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:23.252 05:45:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.252 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:30:23.252 [2024-12-07 05:45:26.417716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:30:23.252 [2024-12-07 05:45:26.417762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016413 ] 00:30:23.252 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.512 [2024-12-07 05:45:26.497273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.512 [2024-12-07 05:45:26.559495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.083 05:45:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.083 05:45:27 -- common/autotest_common.sh@862 -- # return 0 00:30:24.083 05:45:27 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:24.083 05:45:27 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:24.083 05:45:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:24.343 05:45:27 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.343 05:45:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.604 nvme0n1 00:30:24.604 05:45:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:24.604 05:45:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.604 Running I/O for 2 seconds... 00:30:27.148 00:30:27.148 Latency(us) 00:30:27.148 [2024-12-07T04:45:30.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.148 [2024-12-07T04:45:30.388Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:27.148 nvme0n1 : 2.01 16692.16 65.20 0.00 0.00 7660.14 3140.27 16384.00 00:30:27.148 [2024-12-07T04:45:30.388Z] =================================================================================================================== 00:30:27.148 [2024-12-07T04:45:30.388Z] Total : 16692.16 65.20 0.00 0.00 7660.14 3140.27 16384.00 00:30:27.148 0 00:30:27.148 05:45:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:27.148 05:45:29 -- host/digest.sh@92 -- # get_accel_stats 00:30:27.148 05:45:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:27.148 05:45:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:27.148 | select(.opcode=="crc32c") 00:30:27.148 | "\(.module_name) \(.executed)"' 00:30:27.148 05:45:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:27.148 05:45:30 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:27.148 05:45:30 -- host/digest.sh@93 -- # exp_module=software 00:30:27.148 05:45:30 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:27.148 05:45:30 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:27.148 05:45:30 -- host/digest.sh@97 -- # killprocess 2016413 00:30:27.149 05:45:30 -- common/autotest_common.sh@936 -- # '[' -z 2016413 ']' 00:30:27.149 05:45:30 -- common/autotest_common.sh@940 -- # kill -0 2016413 00:30:27.149 05:45:30 -- common/autotest_common.sh@941 -- # uname 00:30:27.149 05:45:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:27.149 05:45:30 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 2016413 00:30:27.149 05:45:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:27.149 05:45:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:27.149 05:45:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2016413' 00:30:27.149 killing process with pid 2016413 00:30:27.149 05:45:30 -- common/autotest_common.sh@955 -- # kill 2016413 00:30:27.149 Received shutdown signal, test time was about 2.000000 seconds 00:30:27.149 00:30:27.149 Latency(us) 00:30:27.149 [2024-12-07T04:45:30.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.149 [2024-12-07T04:45:30.389Z] =================================================================================================================== 00:30:27.149 [2024-12-07T04:45:30.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.149 05:45:30 -- common/autotest_common.sh@960 -- # wait 2016413 00:30:27.149 05:45:30 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:27.149 05:45:30 -- host/digest.sh@77 -- # local rw bs qd 00:30:27.149 05:45:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:27.149 05:45:30 -- host/digest.sh@80 -- # rw=randread 00:30:27.149 05:45:30 -- host/digest.sh@80 -- # bs=131072 00:30:27.149 05:45:30 -- host/digest.sh@80 -- # qd=16 00:30:27.149 05:45:30 -- host/digest.sh@82 -- # bperfpid=2017109 00:30:27.149 05:45:30 -- host/digest.sh@83 -- # waitforlisten 2017109 /var/tmp/bperf.sock 00:30:27.149 05:45:30 -- common/autotest_common.sh@829 -- # '[' -z 2017109 ']' 00:30:27.149 05:45:30 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:27.149 05:45:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.149 05:45:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.149 05:45:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:27.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.149 05:45:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.149 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:30:27.149 [2024-12-07 05:45:30.260483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:27.149 [2024-12-07 05:45:30.260537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017109 ] 00:30:27.149 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:27.149 Zero copy mechanism will not be used. 
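After each bdevperf run the test decides whether digests were really computed by querying the accel framework's statistics. A hedged sketch of that check, using the same RPC socket and jq filter that appear in this log; the expected module here is software, since no accel offload is configured in this job:

    # count executed crc32c operations and report which accel module ran them
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # output is a "<module> <count>" pair; the test reads it into acc_module/acc_executed,
    # asserts the count is greater than zero, and compares the module against the expected one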
00:30:27.149 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.149 [2024-12-07 05:45:30.337990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.410 [2024-12-07 05:45:30.388239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.982 05:45:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.983 05:45:31 -- common/autotest_common.sh@862 -- # return 0 00:30:27.983 05:45:31 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:27.983 05:45:31 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:27.983 05:45:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:28.244 05:45:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.244 05:45:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.506 nvme0n1 00:30:28.506 05:45:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:28.506 05:45:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:28.506 Zero copy mechanism will not be used. 00:30:28.506 Running I/O for 2 seconds... 00:30:30.419 00:30:30.419 Latency(us) 00:30:30.419 [2024-12-07T04:45:33.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.419 [2024-12-07T04:45:33.659Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:30.419 nvme0n1 : 2.04 4561.97 570.25 0.00 0.00 3437.83 593.92 44127.57 00:30:30.419 [2024-12-07T04:45:33.659Z] =================================================================================================================== 00:30:30.419 [2024-12-07T04:45:33.659Z] Total : 4561.97 570.25 0.00 0.00 3437.83 593.92 44127.57 00:30:30.419 0 00:30:30.419 05:45:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:30.419 05:45:33 -- host/digest.sh@92 -- # get_accel_stats 00:30:30.419 05:45:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:30.419 05:45:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:30.419 | select(.opcode=="crc32c") 00:30:30.419 | "\(.module_name) \(.executed)"' 00:30:30.419 05:45:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:30.680 05:45:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:30.680 05:45:33 -- host/digest.sh@93 -- # exp_module=software 00:30:30.680 05:45:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:30.680 05:45:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:30.680 05:45:33 -- host/digest.sh@97 -- # killprocess 2017109 00:30:30.680 05:45:33 -- common/autotest_common.sh@936 -- # '[' -z 2017109 ']' 00:30:30.680 05:45:33 -- common/autotest_common.sh@940 -- # kill -0 2017109 00:30:30.680 05:45:33 -- common/autotest_common.sh@941 -- # uname 00:30:30.680 05:45:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:30.680 05:45:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2017109 00:30:30.680 05:45:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:30.680 05:45:33 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:30.680 05:45:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2017109' 00:30:30.680 killing process with pid 2017109 00:30:30.680 05:45:33 -- common/autotest_common.sh@955 -- # kill 2017109 00:30:30.680 Received shutdown signal, test time was about 2.000000 seconds 00:30:30.680 00:30:30.680 Latency(us) 00:30:30.680 [2024-12-07T04:45:33.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.680 [2024-12-07T04:45:33.920Z] =================================================================================================================== 00:30:30.680 [2024-12-07T04:45:33.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.680 05:45:33 -- common/autotest_common.sh@960 -- # wait 2017109 00:30:30.941 05:45:33 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:30.941 05:45:33 -- host/digest.sh@77 -- # local rw bs qd 00:30:30.941 05:45:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:30.941 05:45:33 -- host/digest.sh@80 -- # rw=randwrite 00:30:30.941 05:45:33 -- host/digest.sh@80 -- # bs=4096 00:30:30.941 05:45:33 -- host/digest.sh@80 -- # qd=128 00:30:30.941 05:45:33 -- host/digest.sh@82 -- # bperfpid=2017821 00:30:30.941 05:45:33 -- host/digest.sh@83 -- # waitforlisten 2017821 /var/tmp/bperf.sock 00:30:30.941 05:45:33 -- common/autotest_common.sh@829 -- # '[' -z 2017821 ']' 00:30:30.941 05:45:33 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:30.941 05:45:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:30.941 05:45:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:30.941 05:45:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:30.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:30.941 05:45:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:30.941 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:30:30.941 [2024-12-07 05:45:34.043621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:30:30.941 [2024-12-07 05:45:34.043690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017821 ] 00:30:30.941 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.941 [2024-12-07 05:45:34.122982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.941 [2024-12-07 05:45:34.174785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.883 05:45:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.883 05:45:34 -- common/autotest_common.sh@862 -- # return 0 00:30:31.883 05:45:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:31.883 05:45:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:31.883 05:45:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:31.883 05:45:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.883 05:45:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.143 nvme0n1 00:30:32.143 05:45:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:32.143 05:45:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.143 Running I/O for 2 seconds... 00:30:34.687 00:30:34.687 Latency(us) 00:30:34.687 [2024-12-07T04:45:37.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.687 [2024-12-07T04:45:37.927Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.687 nvme0n1 : 2.00 22829.09 89.18 0.00 0.00 5602.39 1815.89 9830.40 00:30:34.687 [2024-12-07T04:45:37.927Z] =================================================================================================================== 00:30:34.687 [2024-12-07T04:45:37.927Z] Total : 22829.09 89.18 0.00 0.00 5602.39 1815.89 9830.40 00:30:34.687 0 00:30:34.687 05:45:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:34.687 05:45:37 -- host/digest.sh@92 -- # get_accel_stats 00:30:34.687 05:45:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:34.687 05:45:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:34.687 | select(.opcode=="crc32c") 00:30:34.687 | "\(.module_name) \(.executed)"' 00:30:34.687 05:45:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:34.687 05:45:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:34.687 05:45:37 -- host/digest.sh@93 -- # exp_module=software 00:30:34.687 05:45:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:34.687 05:45:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:34.687 05:45:37 -- host/digest.sh@97 -- # killprocess 2017821 00:30:34.687 05:45:37 -- common/autotest_common.sh@936 -- # '[' -z 2017821 ']' 00:30:34.687 05:45:37 -- common/autotest_common.sh@940 -- # kill -0 2017821 00:30:34.687 05:45:37 -- common/autotest_common.sh@941 -- # uname 00:30:34.687 05:45:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:34.687 05:45:37 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 2017821 00:30:34.687 05:45:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:34.687 05:45:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:34.687 05:45:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2017821' 00:30:34.687 killing process with pid 2017821 00:30:34.687 05:45:37 -- common/autotest_common.sh@955 -- # kill 2017821 00:30:34.687 Received shutdown signal, test time was about 2.000000 seconds 00:30:34.687 00:30:34.687 Latency(us) 00:30:34.687 [2024-12-07T04:45:37.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.687 [2024-12-07T04:45:37.927Z] =================================================================================================================== 00:30:34.687 [2024-12-07T04:45:37.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.687 05:45:37 -- common/autotest_common.sh@960 -- # wait 2017821 00:30:34.687 05:45:37 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:34.687 05:45:37 -- host/digest.sh@77 -- # local rw bs qd 00:30:34.687 05:45:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:34.687 05:45:37 -- host/digest.sh@80 -- # rw=randwrite 00:30:34.687 05:45:37 -- host/digest.sh@80 -- # bs=131072 00:30:34.687 05:45:37 -- host/digest.sh@80 -- # qd=16 00:30:34.687 05:45:37 -- host/digest.sh@82 -- # bperfpid=2018574 00:30:34.687 05:45:37 -- host/digest.sh@83 -- # waitforlisten 2018574 /var/tmp/bperf.sock 00:30:34.687 05:45:37 -- common/autotest_common.sh@829 -- # '[' -z 2018574 ']' 00:30:34.687 05:45:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:34.687 05:45:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:34.687 05:45:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.687 05:45:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:34.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:34.687 05:45:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.687 05:45:37 -- common/autotest_common.sh@10 -- # set +x 00:30:34.687 [2024-12-07 05:45:37.757740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:34.687 [2024-12-07 05:45:37.757797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018574 ] 00:30:34.687 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:34.687 Zero copy mechanism will not be used. 
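Each of the four nvmf_digest_clean runs follows the same bdevperf pattern; only the workload, I/O size, and queue depth change. A minimal sketch of the run that starts here (randwrite, 128 KiB I/O, queue depth 16), with paths shortened to the SPDK tree root; --ddgst is what enables the NVMe/TCP data digest being exercised:

    # start bdevperf idle, init the framework, attach the target with data digest enabled
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # drive the 2-second workload, then collect accel stats as above
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests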
00:30:34.687 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.687 [2024-12-07 05:45:37.834020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.687 [2024-12-07 05:45:37.888443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.629 05:45:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.629 05:45:38 -- common/autotest_common.sh@862 -- # return 0 00:30:35.629 05:45:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:35.629 05:45:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:35.629 05:45:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:35.629 05:45:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:35.629 05:45:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:35.890 nvme0n1 00:30:35.890 05:45:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:35.890 05:45:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:36.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:36.151 Zero copy mechanism will not be used. 00:30:36.151 Running I/O for 2 seconds... 00:30:38.066 00:30:38.066 Latency(us) 00:30:38.066 [2024-12-07T04:45:41.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.066 [2024-12-07T04:45:41.306Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:38.066 nvme0n1 : 2.00 5409.70 676.21 0.00 0.00 2953.31 1358.51 12615.68 00:30:38.066 [2024-12-07T04:45:41.306Z] =================================================================================================================== 00:30:38.066 [2024-12-07T04:45:41.306Z] Total : 5409.70 676.21 0.00 0.00 2953.31 1358.51 12615.68 00:30:38.066 0 00:30:38.066 05:45:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:38.066 05:45:41 -- host/digest.sh@92 -- # get_accel_stats 00:30:38.066 05:45:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:38.066 05:45:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:38.066 | select(.opcode=="crc32c") 00:30:38.066 | "\(.module_name) \(.executed)"' 00:30:38.066 05:45:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:38.327 05:45:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:38.327 05:45:41 -- host/digest.sh@93 -- # exp_module=software 00:30:38.327 05:45:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:38.327 05:45:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:38.327 05:45:41 -- host/digest.sh@97 -- # killprocess 2018574 00:30:38.327 05:45:41 -- common/autotest_common.sh@936 -- # '[' -z 2018574 ']' 00:30:38.327 05:45:41 -- common/autotest_common.sh@940 -- # kill -0 2018574 00:30:38.327 05:45:41 -- common/autotest_common.sh@941 -- # uname 00:30:38.327 05:45:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:38.327 05:45:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2018574 00:30:38.327 05:45:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:38.327 05:45:41 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:38.327 05:45:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2018574' 00:30:38.327 killing process with pid 2018574 00:30:38.327 05:45:41 -- common/autotest_common.sh@955 -- # kill 2018574 00:30:38.327 Received shutdown signal, test time was about 2.000000 seconds 00:30:38.327 00:30:38.327 Latency(us) 00:30:38.327 [2024-12-07T04:45:41.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.327 [2024-12-07T04:45:41.567Z] =================================================================================================================== 00:30:38.327 [2024-12-07T04:45:41.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.327 05:45:41 -- common/autotest_common.sh@960 -- # wait 2018574 00:30:38.327 05:45:41 -- host/digest.sh@126 -- # killprocess 2016193 00:30:38.327 05:45:41 -- common/autotest_common.sh@936 -- # '[' -z 2016193 ']' 00:30:38.327 05:45:41 -- common/autotest_common.sh@940 -- # kill -0 2016193 00:30:38.327 05:45:41 -- common/autotest_common.sh@941 -- # uname 00:30:38.327 05:45:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:38.589 05:45:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2016193 00:30:38.589 05:45:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:38.589 05:45:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:38.589 05:45:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2016193' 00:30:38.589 killing process with pid 2016193 00:30:38.589 05:45:41 -- common/autotest_common.sh@955 -- # kill 2016193 00:30:38.589 05:45:41 -- common/autotest_common.sh@960 -- # wait 2016193 00:30:38.589 00:30:38.589 real 0m16.364s 00:30:38.589 user 0m32.037s 00:30:38.589 sys 0m3.689s 00:30:38.589 05:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:38.589 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:30:38.589 ************************************ 00:30:38.589 END TEST nvmf_digest_clean 00:30:38.589 ************************************ 00:30:38.589 05:45:41 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:38.589 05:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.589 05:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.589 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:30:38.589 ************************************ 00:30:38.589 START TEST nvmf_digest_error 00:30:38.589 ************************************ 00:30:38.589 05:45:41 -- common/autotest_common.sh@1114 -- # run_digest_error 00:30:38.589 05:45:41 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:38.589 05:45:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:38.589 05:45:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:38.589 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:30:38.589 05:45:41 -- nvmf/common.sh@469 -- # nvmfpid=2019530 00:30:38.589 05:45:41 -- nvmf/common.sh@470 -- # waitforlisten 2019530 00:30:38.589 05:45:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:38.589 05:45:41 -- common/autotest_common.sh@829 -- # '[' -z 2019530 ']' 00:30:38.589 05:45:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.589 05:45:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:38.589 05:45:41 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.589 05:45:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:38.589 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:30:38.850 [2024-12-07 05:45:41.861936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:38.850 [2024-12-07 05:45:41.861993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.850 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.850 [2024-12-07 05:45:41.930482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.850 [2024-12-07 05:45:41.995146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:38.850 [2024-12-07 05:45:41.995267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.850 [2024-12-07 05:45:41.995275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.850 [2024-12-07 05:45:41.995283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.850 [2024-12-07 05:45:41.995300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.422 05:45:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:39.422 05:45:42 -- common/autotest_common.sh@862 -- # return 0 00:30:39.422 05:45:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:39.422 05:45:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.422 05:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:39.684 05:45:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.684 05:45:42 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:39.684 05:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.684 05:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:39.684 [2024-12-07 05:45:42.677234] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:39.684 05:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.684 05:45:42 -- host/digest.sh@104 -- # common_target_config 00:30:39.684 05:45:42 -- host/digest.sh@43 -- # rpc_cmd 00:30:39.684 05:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.684 05:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:39.684 null0 00:30:39.684 [2024-12-07 05:45:42.758111] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.684 [2024-12-07 05:45:42.782333] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.684 05:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.684 05:45:42 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:39.684 05:45:42 -- host/digest.sh@54 -- # local rw bs qd 00:30:39.684 05:45:42 -- host/digest.sh@56 -- # rw=randread 00:30:39.684 05:45:42 -- host/digest.sh@56 -- # bs=4096 00:30:39.684 05:45:42 -- host/digest.sh@56 -- # qd=128 00:30:39.684 05:45:42 -- host/digest.sh@58 -- # bperfpid=2019564 00:30:39.684 05:45:42 -- host/digest.sh@60 -- # waitforlisten 
2019564 /var/tmp/bperf.sock 00:30:39.684 05:45:42 -- common/autotest_common.sh@829 -- # '[' -z 2019564 ']' 00:30:39.684 05:45:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:39.684 05:45:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:39.684 05:45:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:39.684 05:45:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:39.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:39.684 05:45:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:39.684 05:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:39.684 [2024-12-07 05:45:42.833454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:39.684 [2024-12-07 05:45:42.833502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019564 ] 00:30:39.684 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.684 [2024-12-07 05:45:42.909041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.945 [2024-12-07 05:45:42.961286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.520 05:45:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:40.520 05:45:43 -- common/autotest_common.sh@862 -- # return 0 00:30:40.520 05:45:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:40.520 05:45:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:40.782 05:45:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:40.782 05:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.782 05:45:43 -- common/autotest_common.sh@10 -- # set +x 00:30:40.782 05:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.782 05:45:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:40.782 05:45:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.044 nvme0n1 00:30:41.044 05:45:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:41.044 05:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.044 05:45:44 -- common/autotest_common.sh@10 -- # set +x 00:30:41.044 05:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.044 05:45:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:41.044 05:45:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:41.044 Running I/O for 2 seconds... 
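The nvmf_digest_error run that begins here deliberately breaks the digest path: at target startup crc32c was assigned to the error accel module (accel_assign_opc -o crc32c -m error), and once the controller is attached the test arms corruption so the target produces bad CRC32C values. A hedged sketch of the sequence, with rpc.py standing in for the test's rpc_cmd/bperf_rpc helpers and the -i 256 interval taken verbatim from the log; as the entries that follow show, the host-side NVMe/TCP code then reports data digest errors and the reads complete with COMMAND TRANSIENT TRANSPORT ERROR:

    # bdevperf side: keep retrying failed commands and track per-error statistics
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: injection stays disabled while the controller attaches...
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then crc32c results are corrupted (interval/count argument of 256) and I/O is run
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests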
00:30:41.044 [2024-12-07 05:45:44.153184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.153214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.153223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.167629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.167648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.167655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.176316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.176333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.176340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.189961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.189979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.189986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.204612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.204628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.204635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.218879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.218896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.218902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.044 [2024-12-07 05:45:44.233686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.044 [2024-12-07 05:45:44.233703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.044 [2024-12-07 05:45:44.233709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.045 [2024-12-07 05:45:44.248237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.045 [2024-12-07 05:45:44.248254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.045 [2024-12-07 05:45:44.248261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.045 [2024-12-07 05:45:44.262220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.045 [2024-12-07 05:45:44.262238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.045 [2024-12-07 05:45:44.262244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.045 [2024-12-07 05:45:44.276860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.045 [2024-12-07 05:45:44.276877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.045 [2024-12-07 05:45:44.276883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.290871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.290889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.290901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.304239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.304256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.304262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.318652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.318668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.318675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.332358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.332375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.332382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.346136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.346152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.346159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.360001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.360021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.372897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.372913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.372920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.384104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.384121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.384128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.398185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.398209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.398215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.412777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.412794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.412801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.427259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.427275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:41.306 [2024-12-07 05:45:44.427282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.442166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.442183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.442190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.456807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.456823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.456830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.470548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.306 [2024-12-07 05:45:44.470565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.306 [2024-12-07 05:45:44.470572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.306 [2024-12-07 05:45:44.485759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.307 [2024-12-07 05:45:44.485775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.307 [2024-12-07 05:45:44.485782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.307 [2024-12-07 05:45:44.499620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.307 [2024-12-07 05:45:44.499637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.307 [2024-12-07 05:45:44.499643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.307 [2024-12-07 05:45:44.513281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.307 [2024-12-07 05:45:44.513298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.307 [2024-12-07 05:45:44.513304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.307 [2024-12-07 05:45:44.527662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.307 [2024-12-07 05:45:44.527679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16099 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.307 [2024-12-07 05:45:44.527689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.307 [2024-12-07 05:45:44.541990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.307 [2024-12-07 05:45:44.542007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.307 [2024-12-07 05:45:44.542018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.556627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.556644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.571333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.571350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.571356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.585830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.585847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.585854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.600677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.600694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.600701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.614918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.614936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.614942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.628633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.628651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:96 nsid:1 lba:3300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.628657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.642198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.642215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.642222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.656983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.657004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.657013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.671749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.671766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.671773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.686084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.686102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.686108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.700525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.700542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.569 [2024-12-07 05:45:44.700548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.569 [2024-12-07 05:45:44.714841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.569 [2024-12-07 05:45:44.714858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.714865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.729533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.729550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.729556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.742572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.742589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.742595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.756826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.756843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.756850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.770573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.770590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.770597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.785174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.785191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.785197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.570 [2024-12-07 05:45:44.799525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.570 [2024-12-07 05:45:44.799542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.570 [2024-12-07 05:45:44.799548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.814636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.814653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.814660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.829597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.829614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.829620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.843478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.843495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.843501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.858599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.858616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.858623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.872732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.872749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.872755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.887462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.887479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.887486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.902110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.902127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.902137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.916659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.916676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.916682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.931310] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.931327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.931333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.945815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.945832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.945838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.960297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.960314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.832 [2024-12-07 05:45:44.960320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.832 [2024-12-07 05:45:44.975005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.832 [2024-12-07 05:45:44.975027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:44.975033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.833 [2024-12-07 05:45:44.989443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:44.989460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:44.989466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.833 [2024-12-07 05:45:45.004034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:45.004051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:45.004058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.833 [2024-12-07 05:45:45.018650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:45.018668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:45.018674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:41.833 [2024-12-07 05:45:45.033030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:45.033051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:45.033057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.833 [2024-12-07 05:45:45.047581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:45.047597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:45.047604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.833 [2024-12-07 05:45:45.059681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:41.833 [2024-12-07 05:45:45.059697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.833 [2024-12-07 05:45:45.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.073592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.073609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.073616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.086490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.086507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.086513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.101876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.101900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.117099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.117116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.117122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.131634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.131651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.131657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.146065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.146082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.146088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.160697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.160714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.160721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.170310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.170326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.170332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.181969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.181986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.181992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.196654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.196670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.196677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.211023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.211040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.211047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.225409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.225425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.225432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.245244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.245261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.245268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.260062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.095 [2024-12-07 05:45:45.260079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.095 [2024-12-07 05:45:45.260086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.095 [2024-12-07 05:45:45.274159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.096 [2024-12-07 05:45:45.274179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.096 [2024-12-07 05:45:45.274185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.096 [2024-12-07 05:45:45.289162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.096 [2024-12-07 05:45:45.289179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.096 [2024-12-07 05:45:45.289186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.096 [2024-12-07 05:45:45.303257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.096 [2024-12-07 05:45:45.303273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.096 [2024-12-07 05:45:45.303279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.096 [2024-12-07 05:45:45.317006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.096 [2024-12-07 05:45:45.317026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.096 [2024-12-07 05:45:45.317032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.096 [2024-12-07 05:45:45.326199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.096 [2024-12-07 05:45:45.326216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.096 [2024-12-07 05:45:45.326222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.339888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.339905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.339912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.353526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.353543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.366424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.366441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.366447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.380326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.380343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.380349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.395149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.395172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.408896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.408913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.408920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.424068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.424084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.424091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.436336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.436352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.436358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.449283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.449299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.449306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.464145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.464162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.464168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.478447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.478463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.478469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.492933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.492950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.492956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.507799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.507816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:9629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.507826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.522831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.522847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.522854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.536643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.536659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.536665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.550168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.550184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.550191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.564490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.564507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.578301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.578317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.578324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.357 [2024-12-07 05:45:45.588464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.357 [2024-12-07 05:45:45.588480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.357 [2024-12-07 05:45:45.588486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.600710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.600726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.600733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.611330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.611346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.611353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.622916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.622936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.622942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.634030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.634046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.634052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.645233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.645250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.645256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.658278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.658294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.658300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.672058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.672075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.672081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.687310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 
00:30:42.618 [2024-12-07 05:45:45.687326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.687332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.701717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.701734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.701740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.716260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.716276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.716283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.730085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.730101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.730107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.745026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.745042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.745048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.759285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.759301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.759308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.773433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.773449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.773455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.788065] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.788081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.788088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.802058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.802075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.802081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.816417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.816440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.830900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.830917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.830923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.618 [2024-12-07 05:45:45.844980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.618 [2024-12-07 05:45:45.844996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.618 [2024-12-07 05:45:45.845002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.858512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.858529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.858539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.873203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.873220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.873226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.888354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.888370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.888377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.903105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.903121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.903127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.917798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.917814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.917820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.932250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.932267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.932273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.945633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.945650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.945656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.959337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.959353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.959359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.974140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.974157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.974163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:45.988709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:45.988726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:45.988733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.002850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.002867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.002873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.017389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.017405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.017412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.032072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.032089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.032095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.046496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.046512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.046519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.060876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.060893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.060899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.074661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.074679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.074685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.089288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.089305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.089311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.879 [2024-12-07 05:45:46.104034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:42.879 [2024-12-07 05:45:46.104051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.879 [2024-12-07 05:45:46.104063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.140 [2024-12-07 05:45:46.118565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:43.140 [2024-12-07 05:45:46.118581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.140 [2024-12-07 05:45:46.118588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.140 [2024-12-07 05:45:46.132797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216c350) 00:30:43.140 [2024-12-07 05:45:46.132814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.140 [2024-12-07 05:45:46.132820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.140 00:30:43.140 Latency(us) 00:30:43.140 [2024-12-07T04:45:46.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.140 [2024-12-07T04:45:46.380Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:43.140 nvme0n1 : 2.05 17748.56 69.33 0.00 0.00 7063.71 1884.16 48059.73 00:30:43.140 [2024-12-07T04:45:46.380Z] =================================================================================================================== 00:30:43.140 [2024-12-07T04:45:46.380Z] Total : 17748.56 69.33 0.00 0.00 7063.71 1884.16 48059.73 00:30:43.140 0 00:30:43.140 05:45:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:43.140 05:45:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:43.140 05:45:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:43.140 | .driver_specific 00:30:43.140 | .nvme_error 00:30:43.140 | .status_code 00:30:43.140 | .command_transient_transport_error' 00:30:43.140 05:45:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:43.140 05:45:46 -- host/digest.sh@71 -- # (( 142 > 0 )) 00:30:43.140 05:45:46 -- host/digest.sh@73 -- # killprocess 2019564 00:30:43.140 05:45:46 -- common/autotest_common.sh@936 -- # '[' -z 2019564 ']' 00:30:43.140 
05:45:46 -- common/autotest_common.sh@940 -- # kill -0 2019564 00:30:43.140 05:45:46 -- common/autotest_common.sh@941 -- # uname 00:30:43.140 05:45:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:43.140 05:45:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2019564 00:30:43.401 05:45:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:43.401 05:45:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:43.401 05:45:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2019564' 00:30:43.401 killing process with pid 2019564 00:30:43.401 05:45:46 -- common/autotest_common.sh@955 -- # kill 2019564 00:30:43.401 Received shutdown signal, test time was about 2.000000 seconds 00:30:43.401 00:30:43.401 Latency(us) 00:30:43.401 [2024-12-07T04:45:46.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.401 [2024-12-07T04:45:46.641Z] =================================================================================================================== 00:30:43.401 [2024-12-07T04:45:46.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.401 05:45:46 -- common/autotest_common.sh@960 -- # wait 2019564 00:30:43.401 05:45:46 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:30:43.401 05:45:46 -- host/digest.sh@54 -- # local rw bs qd 00:30:43.401 05:45:46 -- host/digest.sh@56 -- # rw=randread 00:30:43.401 05:45:46 -- host/digest.sh@56 -- # bs=131072 00:30:43.401 05:45:46 -- host/digest.sh@56 -- # qd=16 00:30:43.401 05:45:46 -- host/digest.sh@58 -- # bperfpid=2020336 00:30:43.401 05:45:46 -- host/digest.sh@60 -- # waitforlisten 2020336 /var/tmp/bperf.sock 00:30:43.401 05:45:46 -- common/autotest_common.sh@829 -- # '[' -z 2020336 ']' 00:30:43.401 05:45:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:43.401 05:45:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:43.401 05:45:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.401 05:45:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:43.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:43.401 05:45:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.401 05:45:46 -- common/autotest_common.sh@10 -- # set +x 00:30:43.401 [2024-12-07 05:45:46.593646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:43.401 [2024-12-07 05:45:46.593704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020336 ] 00:30:43.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:43.401 Zero copy mechanism will not be used. 
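The trace just above is the pass/fail check for this digest run: host/digest.sh asks the bdevperf instance behind /var/tmp/bperf.sock for per-bdev I/O statistics, filters out the NVMe transient transport error counter with jq, and requires it to be non-zero (142 in this run) before killing the bperf process and relaunching bdevperf for the next pattern (randread, 131072-byte I/O, queue depth 16). A minimal shell sketch of that check, using only the RPC call and jq filter visible in the trace (the relative rpc.py path and the errcount variable are illustrative assumptions, not taken from the script):

  get_transient_errcount() {
      local bdev=$1
      # Query the running bdevperf process over the bperf RPC socket and pull the
      # NVMe transient transport error counter out of the per-bdev iostat.
      ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # Each injected CRC32C corruption should surface on the host as a transient
  # transport error, so the run only passes when the counter is non-zero.
  (( errcount > 0 ))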
00:30:43.401 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.674 [2024-12-07 05:45:46.669851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.674 [2024-12-07 05:45:46.721505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.244 05:45:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.244 05:45:47 -- common/autotest_common.sh@862 -- # return 0 00:30:44.244 05:45:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:44.244 05:45:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:44.504 05:45:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:44.504 05:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.505 05:45:47 -- common/autotest_common.sh@10 -- # set +x 00:30:44.505 05:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.505 05:45:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:44.505 05:45:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:44.765 nvme0n1 00:30:44.765 05:45:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:44.765 05:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.765 05:45:47 -- common/autotest_common.sh@10 -- # set +x 00:30:44.765 05:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.766 05:45:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:44.766 05:45:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:45.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:45.028 Zero copy mechanism will not be used. 00:30:45.028 Running I/O for 2 seconds... 
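The RPC sequence traced above arms the next error-injection pass: bdev_nvme_set_options (sent to the bperf socket) turns on per-controller NVMe error statistics and unlimited bdev-layer retries, accel_error_inject_error (sent via rpc_cmd, which in this test appears to address the nvmf target's default RPC socket) first clears any earlier CRC32C injection and then corrupts every 32nd CRC32C operation, and bdev_nvme_attach_controller attaches the target over TCP with data digest (--ddgst) enabled, so each corrupted digest shows up on the host as the data digest errors that follow. A hedged sketch of the same sequence as a standalone script (RPC names and arguments are copied from the trace; the rpc_py/bperf_sock variables and the relative paths are illustrative assumptions):

  rpc_py=./scripts/rpc.py          # assumed relative path to SPDK's rpc.py
  bperf_sock=/var/tmp/bperf.sock   # RPC socket of the bdevperf host process

  # Host side: keep NVMe error statistics and retry failed I/O indefinitely at the bdev layer.
  $rpc_py -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side: reset any previous CRC32C injection, then corrupt every 32nd CRC32C
  # operation so the computed data digest no longer matches the payload.
  $rpc_py accel_error_inject_error -o crc32c -t disable
  $rpc_py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Host side: attach the NVMe-oF/TCP controller with data digest enabled.
  $rpc_py -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Start the timed randread workload inside bdevperf.
  ./examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests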
00:30:45.028 [2024-12-07 05:45:48.019059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.019098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.027406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.027426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.027433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.036741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.036765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.036772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.044592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.044610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.044617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.054170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.054187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.054194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.064307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.064325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.064331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.074428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.074446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.074453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.083535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.083553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.083559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.091539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.091557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.091563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.098683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.098701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.098708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.104865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.104884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.104890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.116038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.116056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.116063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.126671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.126690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.126696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.132882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.132900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.132906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.143817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.143836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.143842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.149550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.149575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.156264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.156282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.156288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.160766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.160784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.160790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.169766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.169785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.179008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.179031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.179041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.184929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.184947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.029 [2024-12-07 05:45:48.184954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.189184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.189202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.189208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.193591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.193608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.193615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.201202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.201219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.029 [2024-12-07 05:45:48.201225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.029 [2024-12-07 05:45:48.207890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.029 [2024-12-07 05:45:48.207908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.207914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.216620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.216638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.216644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.221915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.221932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.221939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.229997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.230019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.230026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.239651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.239670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.248103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.248121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.248127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.030 [2024-12-07 05:45:48.257289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.030 [2024-12-07 05:45:48.257308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.030 [2024-12-07 05:45:48.257315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.292 [2024-12-07 05:45:48.265661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.292 [2024-12-07 05:45:48.265680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.292 [2024-12-07 05:45:48.265687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.292 [2024-12-07 05:45:48.274887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.292 [2024-12-07 05:45:48.274904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.292 [2024-12-07 05:45:48.274911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.292 [2024-12-07 05:45:48.283435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.292 [2024-12-07 05:45:48.283452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.292 [2024-12-07 05:45:48.283459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.292 [2024-12-07 05:45:48.290908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.292 [2024-12-07 05:45:48.290926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.292 [2024-12-07 05:45:48.290932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.300323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.300340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.300346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.309703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.309721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.309730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.316291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.316309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.316315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.319782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.319800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.319806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.328410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.328428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.328434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.337486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.337504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.337510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.344760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.344778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.344784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.354758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.354776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.354783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.365320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.365338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.365344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.376492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.376509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.376516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.386195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.386216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.386223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.398594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.398611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.398617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.407616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.407634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.407640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.415311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.415329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.415335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.427134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.427152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.427159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.437273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.437291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.437298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.447667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.447685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.447692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.456119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.456137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.456143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.466760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.466779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.466785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.478198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.478216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.478222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.487213] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.487231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.487237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.498142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.498161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.509019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.509037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.509044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.293 [2024-12-07 05:45:48.520513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.293 [2024-12-07 05:45:48.520530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.293 [2024-12-07 05:45:48.520537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.556 [2024-12-07 05:45:48.531171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.556 [2024-12-07 05:45:48.531188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.556 [2024-12-07 05:45:48.531194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.556 [2024-12-07 05:45:48.542103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.556 [2024-12-07 05:45:48.542121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.556 [2024-12-07 05:45:48.542128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.556 [2024-12-07 05:45:48.553742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.556 [2024-12-07 05:45:48.553759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.556 [2024-12-07 05:45:48.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:45.557 [2024-12-07 05:45:48.564543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.572355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.572373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.572379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.581379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.581396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.581403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.590227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.590250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.597770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.597788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.597794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.605605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.605623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.605629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.613874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.613892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.613898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.624710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.624728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.624735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.632829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.632847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.632853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.638183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.638204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.638210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.642481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.642498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.642504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.646767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.646784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.646790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.654146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.654163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.654169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.663514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.663531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.663537] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.672852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.672870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.672876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.681867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.681885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.681891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.689306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.689324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.689330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.697779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.697797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.697804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.707643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.707661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.707668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.717178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.717196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.717202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.726165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.726183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.726189] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.737305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.737323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.737329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.748638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.748656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.748662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.758650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.758668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.758675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.768689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.768706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.768712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.780310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.557 [2024-12-07 05:45:48.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.557 [2024-12-07 05:45:48.790418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.557 [2024-12-07 05:45:48.790435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.558 [2024-12-07 05:45:48.790445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.820 [2024-12-07 05:45:48.800567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.820 [2024-12-07 05:45:48.800584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:45.820 [2024-12-07 05:45:48.800590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.820 [2024-12-07 05:45:48.809491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.820 [2024-12-07 05:45:48.809509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.820 [2024-12-07 05:45:48.809515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.820 [2024-12-07 05:45:48.819262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.820 [2024-12-07 05:45:48.819279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.820 [2024-12-07 05:45:48.819286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.820 [2024-12-07 05:45:48.827051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.820 [2024-12-07 05:45:48.827069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.820 [2024-12-07 05:45:48.827075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.820 [2024-12-07 05:45:48.835494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.835512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.835518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.843960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.843977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.843983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.854361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.854380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.854386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.863893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.863910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.863916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.870865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.870886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.870893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.874694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.874712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.874718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.878826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.878844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.878852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.886338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.886355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.886361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.894881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.894898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.894905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.902100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.902117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.902123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.910644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.910661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.910667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.921337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.921354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.921360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.932007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.932029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.932039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.942335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.942352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.942358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.952948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.952965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.952971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.963979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.963995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.964001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.974378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.974395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.974401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.984999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.985020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.985027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:48.995659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:48.995676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:48.995683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.006876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:49.006893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.006899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.016451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:49.016468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.016474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.021660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:49.021682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.021688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.029888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:49.029905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.029911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.039981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:45.821 [2024-12-07 05:45:49.039998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.040004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:45.821 [2024-12-07 05:45:49.048728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 
00:30:45.821 [2024-12-07 05:45:49.048745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.821 [2024-12-07 05:45:49.048752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.058379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.058395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.058402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.067191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.067208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.067214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.077863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.077881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.077888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.088111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.088129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.088136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.098954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.098971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.098978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.106928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.106945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.106952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.116195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.116213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.116219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.125532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.125549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.125555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.133660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.133677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.133684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.143644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.083 [2024-12-07 05:45:49.143661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.083 [2024-12-07 05:45:49.143667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.083 [2024-12-07 05:45:49.154000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.154021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.154028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.164018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.164035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.164041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.172953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.172970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.172977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.181736] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.181754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.181764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.189301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.189319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.189325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.198066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.198083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.198090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.206655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.206672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.206679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.215904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.215922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.215928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.225703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.225721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.225727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.236348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.236366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.236373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:46.084 [2024-12-07 05:45:49.245806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.245824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.245830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.254423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.254441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.262829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.262853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.262860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.272290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.272308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.272315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.282409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.282426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.282432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.292916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.292933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.292940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.302041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.302058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.302065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.312562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.312579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.312586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.084 [2024-12-07 05:45:49.318665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.084 [2024-12-07 05:45:49.318683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.084 [2024-12-07 05:45:49.318689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.326138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.326156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.326162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.334726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.334744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.334750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.344457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.344474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.344481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.353306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.353323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.353330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.362069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.362086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.362093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.370009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.370031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.370038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.380612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.380630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.380636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.388926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.388943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.388950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.399105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.399122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.399129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.409132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.409149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.409155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.417668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.417685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.417694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.425053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.425071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.425078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.430528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.430546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.430552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.436464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.436481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.440488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.440505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.440511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.445541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.445557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.445564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.454731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.454755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.461664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.461681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.461687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.473784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.473801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:46.347 [2024-12-07 05:45:49.473807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.483592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.483613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.483619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.491056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.491073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.491079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.499749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.499766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.499772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.507965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.507983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.507989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.517635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.517651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.347 [2024-12-07 05:45:49.517657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.347 [2024-12-07 05:45:49.526799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.347 [2024-12-07 05:45:49.526817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.526823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.348 [2024-12-07 05:45:49.536931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.348 [2024-12-07 05:45:49.536949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.536955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.348 [2024-12-07 05:45:49.545479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.348 [2024-12-07 05:45:49.545498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.545504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.348 [2024-12-07 05:45:49.557444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.348 [2024-12-07 05:45:49.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.557468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.348 [2024-12-07 05:45:49.567860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.348 [2024-12-07 05:45:49.567878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.567884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.348 [2024-12-07 05:45:49.577854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.348 [2024-12-07 05:45:49.577872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.348 [2024-12-07 05:45:49.577878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.587318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.587336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.587342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.598461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.598479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.598485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.609296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.609314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.609320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.617492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.617510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.617516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.628529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.628546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.628553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.641460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.641478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.641484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.653529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.653550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.653556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.661824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.661842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.661848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.671907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.671923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.671930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.680621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.680639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.680646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.689812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.689831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.689837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.700117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.700133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.700140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.610 [2024-12-07 05:45:49.710449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.610 [2024-12-07 05:45:49.710466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.610 [2024-12-07 05:45:49.710473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.722018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.722036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.722042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.732241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.732258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.732264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.743163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.743181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.743187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.754339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 
00:30:46.611 [2024-12-07 05:45:49.754357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.754363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.764158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.764176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.764182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.771459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.771476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.771483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.781641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.781659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.781665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.792741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.792758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.792765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.803914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.803932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.803939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.815957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.815975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.815981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.827540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.827557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.827567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.611 [2024-12-07 05:45:49.838135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.611 [2024-12-07 05:45:49.838153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.611 [2024-12-07 05:45:49.838159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.849363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.849381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.849388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.861086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.861103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.861110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.872094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.872112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.872119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.884099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.884117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.884125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.895378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.895396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.895402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.905396] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.905414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.905421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.913504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.913522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.913528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.925029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.925049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.925056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.937069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.937087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.937093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.948414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.948432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.948440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.960258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.960276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.960282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.971401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.971418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.971424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:46.873 [2024-12-07 05:45:49.982810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.982827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.982834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:49.994973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:49.994991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:49.994998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:50.007927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:50.007946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.873 [2024-12-07 05:45:50.007952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.873 [2024-12-07 05:45:50.018655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1520210) 00:30:46.873 [2024-12-07 05:45:50.018674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.874 [2024-12-07 05:45:50.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:46.874 00:30:46.874 Latency(us) 00:30:46.874 [2024-12-07T04:45:50.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.874 [2024-12-07T04:45:50.114Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:46.874 nvme0n1 : 2.05 3310.35 413.79 0.00 0.00 4739.27 675.84 48059.73 00:30:46.874 [2024-12-07T04:45:50.114Z] =================================================================================================================== 00:30:46.874 [2024-12-07T04:45:50.114Z] Total : 3310.35 413.79 0.00 0.00 4739.27 675.84 48059.73 00:30:46.874 0 00:30:46.874 05:45:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:46.874 05:45:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:46.874 05:45:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:46.874 | .driver_specific 00:30:46.874 | .nvme_error 00:30:46.874 | .status_code 00:30:46.874 | .command_transient_transport_error' 00:30:46.874 05:45:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:47.135 05:45:50 -- host/digest.sh@71 -- # (( 218 > 0 )) 00:30:47.135 05:45:50 -- host/digest.sh@73 -- # killprocess 2020336 00:30:47.136 05:45:50 -- common/autotest_common.sh@936 -- # '[' -z 2020336 ']' 00:30:47.136 05:45:50 -- common/autotest_common.sh@940 -- # kill -0 2020336 00:30:47.136 05:45:50 -- common/autotest_common.sh@941 -- # uname 
00:30:47.136 05:45:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:47.136 05:45:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2020336 00:30:47.136 05:45:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:47.136 05:45:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:47.136 05:45:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2020336' 00:30:47.136 killing process with pid 2020336 00:30:47.136 05:45:50 -- common/autotest_common.sh@955 -- # kill 2020336 00:30:47.136 Received shutdown signal, test time was about 2.000000 seconds 00:30:47.136 00:30:47.136 Latency(us) 00:30:47.136 [2024-12-07T04:45:50.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.136 [2024-12-07T04:45:50.376Z] =================================================================================================================== 00:30:47.136 [2024-12-07T04:45:50.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:47.136 05:45:50 -- common/autotest_common.sh@960 -- # wait 2020336 00:30:47.398 05:45:50 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:30:47.398 05:45:50 -- host/digest.sh@54 -- # local rw bs qd 00:30:47.398 05:45:50 -- host/digest.sh@56 -- # rw=randwrite 00:30:47.398 05:45:50 -- host/digest.sh@56 -- # bs=4096 00:30:47.398 05:45:50 -- host/digest.sh@56 -- # qd=128 00:30:47.398 05:45:50 -- host/digest.sh@58 -- # bperfpid=2021168 00:30:47.398 05:45:50 -- host/digest.sh@60 -- # waitforlisten 2021168 /var/tmp/bperf.sock 00:30:47.398 05:45:50 -- common/autotest_common.sh@829 -- # '[' -z 2021168 ']' 00:30:47.398 05:45:50 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:47.398 05:45:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:47.398 05:45:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.398 05:45:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:47.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:47.398 05:45:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.398 05:45:50 -- common/autotest_common.sh@10 -- # set +x 00:30:47.398 [2024-12-07 05:45:50.475884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
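At this point host/digest.sh has torn down the randread bperf instance (pid 2020336, which counted 218 transient transport errors) and is launching a fresh bdevperf (pid 2021168) for the randwrite 4096/128 round. A minimal sketch of that launch, assuming the usual meanings of bdevperf's flags (the trace prints the command line itself but not its help text):

  # Flag meanings below are assumptions based on bdevperf's usual usage text,
  # not printed anywhere in this log:
  #   -m 2                    core mask 0x2 (run the reactor on core 1)
  #   -r /var/tmp/bperf.sock  private RPC socket for this bperf instance
  #   -w randwrite            workload pattern
  #   -o 4096                 I/O size in bytes
  #   -t 2                    run time in seconds
  #   -q 128                  queue depth
  #   -z                      start idle and wait for a perform_tests RPC
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

Running with -z is what lets the script configure the NVMe bdev and arm the error injection first: bdevperf sits idle on /var/tmp/bperf.sock until the perform_tests RPC that appears further down in the trace.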
00:30:47.398 [2024-12-07 05:45:50.475942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021168 ] 00:30:47.398 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.398 [2024-12-07 05:45:50.554137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.398 [2024-12-07 05:45:50.605836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.339 05:45:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.339 05:45:51 -- common/autotest_common.sh@862 -- # return 0 00:30:48.339 05:45:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:48.339 05:45:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:48.339 05:45:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:48.339 05:45:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.339 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:30:48.339 05:45:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.339 05:45:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:48.339 05:45:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:48.599 nvme0n1 00:30:48.599 05:45:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:48.599 05:45:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.599 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:30:48.599 05:45:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.599 05:45:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:48.599 05:45:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:48.599 Running I/O for 2 seconds... 
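The xtrace above is the complete setup for one randwrite digest-error round: the bperf instance is configured over /var/tmp/bperf.sock, CRC32C corruption is armed through rpc_cmd, and perform_tests starts the 2-second run whose completions fill the rest of this log. The (00/22) completions that follow are NVMe generic status 0x22, Transient Transport Error, which is exactly what --nvme-error-stat tallies. Regrouped as a plain shell sketch (paths and RPC commands are copied from the trace; the two helper functions, the assumption that rpc_cmd targets the nvmf target's default socket, and the condensed form of the jq filter are this sketch's own):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Assumed helpers: bperf_rpc mirrors digest.sh's bperf_rpc (bperf socket);
  # tgt_rpc stands in for rpc_cmd, presumed to reach the nvmf target app.
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }

  # Count NVMe error statuses and retry indefinitely, so injected digest
  # failures surface as command_transient_transport_error statistics
  # rather than failed I/O.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest (--ddgst) enabled; CRC32C
  # corruption stays disabled while the connection is established so the
  # attach itself succeeds.
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm corruption for the next 256 CRC32C operations, run the 2-second
  # workload, then read back the transient-transport-error counter
  # (same filter as get_transient_errcount above, written on one line).
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Because every corrupted digest is retried by the bdev layer (--bdev-retry-count -1), the pass/fail criterion is simply that the counter read back at the end is greater than zero, as in the (( 218 > 0 )) check from the randread round above.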
00:30:48.599 [2024-12-07 05:45:51.817501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f9f68 00:30:48.599 [2024-12-07 05:45:51.818089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.599 [2024-12-07 05:45:51.818114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:48.599 [2024-12-07 05:45:51.829707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:48.599 [2024-12-07 05:45:51.830607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.599 [2024-12-07 05:45:51.830626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.841809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f46d0 00:30:48.860 [2024-12-07 05:45:51.842665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.842682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.852021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f8e88 00:30:48.860 [2024-12-07 05:45:51.852267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.852282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.865761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f8618 00:30:48.860 [2024-12-07 05:45:51.866711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.866727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.876004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ea248 00:30:48.860 [2024-12-07 05:45:51.876919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.876935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.887333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ed4e8 00:30:48.860 [2024-12-07 05:45:51.888201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.888217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.900464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ecc78 00:30:48.860 [2024-12-07 05:45:51.901718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.901735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.910949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e1710 00:30:48.860 [2024-12-07 05:45:51.911951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.911968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.922674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eaab8 00:30:48.860 [2024-12-07 05:45:51.923964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.923980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.934110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e5a90 00:30:48.860 [2024-12-07 05:45:51.935386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.935402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:48.860 [2024-12-07 05:45:51.945589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fc560 00:30:48.860 [2024-12-07 05:45:51.946840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.860 [2024-12-07 05:45:51.946855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:51.956965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eb760 00:30:48.861 [2024-12-07 05:45:51.958231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:51.958247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:51.968310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e84c0 00:30:48.861 [2024-12-07 05:45:51.969555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:51.969574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:51.979670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f9b30 00:30:48.861 [2024-12-07 05:45:51.980906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:51.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:51.991025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e4578 00:30:48.861 [2024-12-07 05:45:51.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:51.992275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.001981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f5be8 00:30:48.861 [2024-12-07 05:45:52.002799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.002815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.013697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fbcf0 00:30:48.861 [2024-12-07 05:45:52.014784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.014800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.025099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f0788 00:30:48.861 [2024-12-07 05:45:52.026158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.026174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.036461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e4140 00:30:48.861 [2024-12-07 05:45:52.037505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.037521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.047817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ed0b0 00:30:48.861 [2024-12-07 05:45:52.048894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.048910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.059159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f5be8 00:30:48.861 [2024-12-07 05:45:52.060213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.060229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.070525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb480 00:30:48.861 [2024-12-07 05:45:52.071577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.071594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.082200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fc998 00:30:48.861 [2024-12-07 05:45:52.083141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.083158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:48.861 [2024-12-07 05:45:52.093659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb048 00:30:48.861 [2024-12-07 05:45:52.094034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.861 [2024-12-07 05:45:52.094050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:49.122 [2024-12-07 05:45:52.105318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f9f68 00:30:49.122 [2024-12-07 05:45:52.105695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.122 [2024-12-07 05:45:52.105710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:49.122 [2024-12-07 05:45:52.118250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb048 00:30:49.122 [2024-12-07 05:45:52.119270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.122 [2024-12-07 05:45:52.119286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:49.122 [2024-12-07 05:45:52.128985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eaab8 00:30:49.122 [2024-12-07 05:45:52.129805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.122 [2024-12-07 05:45:52.129821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.139945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ecc78 00:30:49.123 [2024-12-07 05:45:52.140986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.141002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.151508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eaef0 00:30:49.123 [2024-12-07 05:45:52.152568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.152584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.162952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f5378 00:30:49.123 [2024-12-07 05:45:52.163500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.163516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.176247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f96f8 00:30:49.123 [2024-12-07 05:45:52.177584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.177600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.186545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.123 [2024-12-07 05:45:52.187401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.187417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.197861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e0ea0 00:30:49.123 [2024-12-07 05:45:52.199256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.199272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.209250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ef6a8 00:30:49.123 [2024-12-07 05:45:52.210638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.210654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.219852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190edd58 00:30:49.123 [2024-12-07 05:45:52.220507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.220523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.231207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f0788 00:30:49.123 [2024-12-07 05:45:52.231969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.231987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.243233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190df550 00:30:49.123 [2024-12-07 05:45:52.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.244600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.254748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f57b0 00:30:49.123 [2024-12-07 05:45:52.255409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.255424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.265490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f8e88 00:30:49.123 [2024-12-07 05:45:52.266592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.266608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.276865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f0788 00:30:49.123 [2024-12-07 05:45:52.277761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.277777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.289690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e3060 00:30:49.123 [2024-12-07 05:45:52.291222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.291238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.301049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ecc78 00:30:49.123 [2024-12-07 05:45:52.302575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.302591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.311852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.123 [2024-12-07 05:45:52.312965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.312980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.321707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f3e60 00:30:49.123 [2024-12-07 05:45:52.322422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.322438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.333172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:49.123 [2024-12-07 05:45:52.333907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.333923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.346808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fac10 00:30:49.123 [2024-12-07 05:45:52.348220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.348237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.123 [2024-12-07 05:45:52.356547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e12d8 00:30:49.123 [2024-12-07 05:45:52.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.123 [2024-12-07 05:45:52.357019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.367840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de038 00:30:49.384 [2024-12-07 05:45:52.368738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.368757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.379225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.384 [2024-12-07 05:45:52.380115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.380131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.390578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de470 00:30:49.384 [2024-12-07 05:45:52.391510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.391527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.401928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f6020 00:30:49.384 [2024-12-07 05:45:52.402832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.402848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.413298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f46d0 00:30:49.384 [2024-12-07 05:45:52.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.414231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.424632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ea680 00:30:49.384 [2024-12-07 05:45:52.425541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.425557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.435976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fac10 00:30:49.384 [2024-12-07 05:45:52.436878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.384 [2024-12-07 05:45:52.436894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.384 [2024-12-07 05:45:52.447349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fbcf0 00:30:49.385 [2024-12-07 05:45:52.448245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.448261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.458705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e4140 00:30:49.385 [2024-12-07 05:45:52.459544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.459560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.470086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.385 [2024-12-07 05:45:52.470921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.470937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.481433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f3a28 00:30:49.385 [2024-12-07 05:45:52.482252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.482268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.494002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e38d0 00:30:49.385 [2024-12-07 05:45:52.494985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.495001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.506826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190df118 00:30:49.385 [2024-12-07 05:45:52.508394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.508410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.518214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f8e88 00:30:49.385 [2024-12-07 05:45:52.519783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.519799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.529545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fe2e8 00:30:49.385 [2024-12-07 05:45:52.531101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.531117] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.540876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e3d08 00:30:49.385 [2024-12-07 05:45:52.542383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.542399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.552205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f1430 00:30:49.385 [2024-12-07 05:45:52.553739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.553755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.563576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ebfd0 00:30:49.385 [2024-12-07 05:45:52.565110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.565126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.574977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ec408 00:30:49.385 [2024-12-07 05:45:52.576505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.576521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.585993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e7c50 00:30:49.385 [2024-12-07 05:45:52.587163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.587179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.596139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e38d0 00:30:49.385 [2024-12-07 05:45:52.596527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.596543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.607433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f4f40 00:30:49.385 [2024-12-07 05:45:52.608341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 
05:45:52.608357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.385 [2024-12-07 05:45:52.618851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e3d08 00:30:49.385 [2024-12-07 05:45:52.619791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.385 [2024-12-07 05:45:52.619806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.630204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f1868 00:30:49.646 [2024-12-07 05:45:52.631129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.631145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.641536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e3060 00:30:49.646 [2024-12-07 05:45:52.642440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.642456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.652888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb480 00:30:49.646 [2024-12-07 05:45:52.653790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.653807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.664229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f6020 00:30:49.646 [2024-12-07 05:45:52.665136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.665154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.675585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ebfd0 00:30:49.646 [2024-12-07 05:45:52.676433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.676448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.686931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e73e0 00:30:49.646 [2024-12-07 05:45:52.687796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:49.646 [2024-12-07 05:45:52.687812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.698320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e5a90 00:30:49.646 [2024-12-07 05:45:52.699179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.699195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.709652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ec408 00:30:49.646 [2024-12-07 05:45:52.710500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.722472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ee5c8 00:30:49.646 [2024-12-07 05:45:52.723530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.723545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.734827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eee38 00:30:49.646 [2024-12-07 05:45:52.736191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.736207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.746246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e84c0 00:30:49.646 [2024-12-07 05:45:52.747594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.747610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.757691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e7c50 00:30:49.646 [2024-12-07 05:45:52.759111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.759127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.768747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb480 00:30:49.646 [2024-12-07 05:45:52.769957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15345 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.769973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.780135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ebb98 00:30:49.646 [2024-12-07 05:45:52.781366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.781382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.791461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ef6a8 00:30:49.646 [2024-12-07 05:45:52.792691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.792707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.801863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e5a90 00:30:49.646 [2024-12-07 05:45:52.802323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.802339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.812736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190efae0 00:30:49.646 [2024-12-07 05:45:52.813390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.813406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.825664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e8d30 00:30:49.646 [2024-12-07 05:45:52.826301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.826318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.836947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ef270 00:30:49.646 [2024-12-07 05:45:52.837439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.837456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.848477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e4578 00:30:49.646 [2024-12-07 05:45:52.849088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19138 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.849104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.859812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ee190 00:30:49.646 [2024-12-07 05:45:52.860383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.860399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.871265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f57b0 00:30:49.646 [2024-12-07 05:45:52.871844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.646 [2024-12-07 05:45:52.871860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:49.646 [2024-12-07 05:45:52.882717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190df118 00:30:49.907 [2024-12-07 05:45:52.883361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.883377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.894076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ee190 00:30:49.907 [2024-12-07 05:45:52.894648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.894665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.905502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e7818 00:30:49.907 [2024-12-07 05:45:52.906057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.906072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.916894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e23b8 00:30:49.907 [2024-12-07 05:45:52.917441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.917457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.928269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ee5c8 00:30:49.907 [2024-12-07 05:45:52.928802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:10487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.928818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.939601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb8b8 00:30:49.907 [2024-12-07 05:45:52.940098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.907 [2024-12-07 05:45:52.940114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:49.907 [2024-12-07 05:45:52.950937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e0a68 00:30:49.907 [2024-12-07 05:45:52.951453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:52.951469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:52.962313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f1430 00:30:49.908 [2024-12-07 05:45:52.962851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:52.962870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:52.973713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.908 [2024-12-07 05:45:52.974199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:52.974215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:52.985035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e0630 00:30:49.908 [2024-12-07 05:45:52.985384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:52.985400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:52.996361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190efae0 00:30:49.908 [2024-12-07 05:45:52.996697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:52.996713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.007743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f96f8 00:30:49.908 [2024-12-07 05:45:53.008213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.008230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.019127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f1868 00:30:49.908 [2024-12-07 05:45:53.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.019627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.030553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e9e10 00:30:49.908 [2024-12-07 05:45:53.031003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.031022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.041949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eb328 00:30:49.908 [2024-12-07 05:45:53.042354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.042370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.053293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.908 [2024-12-07 05:45:53.053713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.053729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.064680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ec840 00:30:49.908 [2024-12-07 05:45:53.064950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.064965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.077734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fdeb0 00:30:49.908 [2024-12-07 05:45:53.079050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.079066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.087591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eb328 00:30:49.908 [2024-12-07 05:45:53.088548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.088564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.099305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eea00 00:30:49.908 [2024-12-07 05:45:53.100336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.100353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.111243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190dfdc0 00:30:49.908 [2024-12-07 05:45:53.112027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.122062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f2510 00:30:49.908 [2024-12-07 05:45:53.123033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.123049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:49.908 [2024-12-07 05:45:53.133539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fe720 00:30:49.908 [2024-12-07 05:45:53.134603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.908 [2024-12-07 05:45:53.134619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.146652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ed0b0 00:30:50.169 [2024-12-07 05:45:53.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.148129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.157014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f5be8 00:30:50.169 [2024-12-07 05:45:53.158008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.158027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.168160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fdeb0 00:30:50.169 [2024-12-07 05:45:53.169343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.169358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.179568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e6738 00:30:50.169 [2024-12-07 05:45:53.180751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.180766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.190908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f46d0 00:30:50.169 [2024-12-07 05:45:53.192136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.192152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.202248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb480 00:30:50.169 [2024-12-07 05:45:53.203445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.203461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.212824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de038 00:30:50.169 [2024-12-07 05:45:53.213301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.213316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.225043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190df118 00:30:50.169 [2024-12-07 05:45:53.226195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.226210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.236956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fb480 00:30:50.169 [2024-12-07 05:45:53.237817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.237833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.247197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ff3c8 00:30:50.169 [2024-12-07 05:45:53.247963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.247979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.261020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f4f40 00:30:50.169 [2024-12-07 05:45:53.262554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.262570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.271410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ee190 00:30:50.169 [2024-12-07 05:45:53.272485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.272500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.281687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ec840 00:30:50.169 [2024-12-07 05:45:53.282260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.282275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.292405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fd208 00:30:50.169 [2024-12-07 05:45:53.292822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.292838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.305958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f1ca0 00:30:50.169 [2024-12-07 05:45:53.307201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.307217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.315982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f2948 00:30:50.169 [2024-12-07 05:45:53.316834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.316850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.327779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fbcf0 00:30:50.169 [2024-12-07 
05:45:53.328861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.328877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.339212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e0630 00:30:50.169 [2024-12-07 05:45:53.339775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.339790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.169 [2024-12-07 05:45:53.350685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ec840 00:30:50.169 [2024-12-07 05:45:53.351211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.169 [2024-12-07 05:45:53.351227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.170 [2024-12-07 05:45:53.362090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.170 [2024-12-07 05:45:53.362624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.170 [2024-12-07 05:45:53.362643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.170 [2024-12-07 05:45:53.373477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e84c0 00:30:50.170 [2024-12-07 05:45:53.373969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.170 [2024-12-07 05:45:53.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.170 [2024-12-07 05:45:53.384858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190dfdc0 00:30:50.170 [2024-12-07 05:45:53.385368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.170 [2024-12-07 05:45:53.385384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.170 [2024-12-07 05:45:53.396259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f92c0 00:30:50.170 [2024-12-07 05:45:53.396746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.170 [2024-12-07 05:45:53.396762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.407644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190dfdc0 
00:30:50.430 [2024-12-07 05:45:53.408127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.408143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.419076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190df988 00:30:50.430 [2024-12-07 05:45:53.419587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.419603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.431982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190dfdc0 00:30:50.430 [2024-12-07 05:45:53.433124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.433139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.443325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e49b0 00:30:50.430 [2024-12-07 05:45:53.444562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.444578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.454825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f2948 00:30:50.430 [2024-12-07 05:45:53.456367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.456383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.465201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f7da8 00:30:50.430 [2024-12-07 05:45:53.465994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.466009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.476141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e9e10 00:30:50.430 [2024-12-07 05:45:53.477161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.477176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.486842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with 
pdu=0x2000190edd58 00:30:50.430 [2024-12-07 05:45:53.487428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.487445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.498934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190efae0 00:30:50.430 [2024-12-07 05:45:53.499656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.499672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.511019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190ea248 00:30:50.430 [2024-12-07 05:45:53.511912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.511928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.521879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f96f8 00:30:50.430 [2024-12-07 05:45:53.522922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.533285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f96f8 00:30:50.430 [2024-12-07 05:45:53.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.534362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.544166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f31b8 00:30:50.430 [2024-12-07 05:45:53.545020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.545036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.554825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e6300 00:30:50.430 [2024-12-07 05:45:53.555160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.555176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.567053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa69760) with pdu=0x2000190f1430 00:30:50.430 [2024-12-07 05:45:53.568158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.568175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.579416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f0350 00:30:50.430 [2024-12-07 05:45:53.580485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.430 [2024-12-07 05:45:53.580501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.430 [2024-12-07 05:45:53.589303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f9f68 00:30:50.430 [2024-12-07 05:45:53.589958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.589973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.601030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e49b0 00:30:50.431 [2024-12-07 05:45:53.601814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.601830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.612556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e6b70 00:30:50.431 [2024-12-07 05:45:53.612772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.612787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.623956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f2948 00:30:50.431 [2024-12-07 05:45:53.624163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.624178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.635404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190eb760 00:30:50.431 [2024-12-07 05:45:53.635618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.635634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.648519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa69760) with pdu=0x2000190ef270 00:30:50.431 [2024-12-07 05:45:53.649714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.649730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.431 [2024-12-07 05:45:53.658388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e4578 00:30:50.431 [2024-12-07 05:45:53.659116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.431 [2024-12-07 05:45:53.659135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.691 [2024-12-07 05:45:53.669857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fc560 00:30:50.691 [2024-12-07 05:45:53.670801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.691 [2024-12-07 05:45:53.670817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.691 [2024-12-07 05:45:53.682592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e12d8 00:30:50.691 [2024-12-07 05:45:53.683565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.691 [2024-12-07 05:45:53.683581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.692852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190fbcf0 00:30:50.692 [2024-12-07 05:45:53.693477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.693494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.704356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f6cc8 00:30:50.692 [2024-12-07 05:45:53.704998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.705018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.715777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e5220 00:30:50.692 [2024-12-07 05:45:53.716414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.716431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.727248] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190f7970 00:30:50.692 [2024-12-07 05:45:53.727921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.727937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.738670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190e0a68 00:30:50.692 [2024-12-07 05:45:53.739329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.739345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.750573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.750819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.750835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.762301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.762557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.762573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.774023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.774298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.785712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.785956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.785973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 05:45:53.797402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.797637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.797653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 [2024-12-07 
05:45:53.809048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69760) with pdu=0x2000190de8a8 00:30:50.692 [2024-12-07 05:45:53.809273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.692 [2024-12-07 05:45:53.809288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.692 00:30:50.692 Latency(us) 00:30:50.692 [2024-12-07T04:45:53.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.692 [2024-12-07T04:45:53.932Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.692 nvme0n1 : 2.01 22252.15 86.92 0.00 0.00 5741.63 2280.11 14964.05 00:30:50.692 [2024-12-07T04:45:53.932Z] =================================================================================================================== 00:30:50.692 [2024-12-07T04:45:53.932Z] Total : 22252.15 86.92 0.00 0.00 5741.63 2280.11 14964.05 00:30:50.692 0 00:30:50.692 05:45:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:50.692 05:45:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:50.692 05:45:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:50.692 | .driver_specific 00:30:50.692 | .nvme_error 00:30:50.692 | .status_code 00:30:50.692 | .command_transient_transport_error' 00:30:50.692 05:45:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:50.953 05:45:53 -- host/digest.sh@71 -- # (( 175 > 0 )) 00:30:50.953 05:45:53 -- host/digest.sh@73 -- # killprocess 2021168 00:30:50.953 05:45:53 -- common/autotest_common.sh@936 -- # '[' -z 2021168 ']' 00:30:50.953 05:45:53 -- common/autotest_common.sh@940 -- # kill -0 2021168 00:30:50.953 05:45:53 -- common/autotest_common.sh@941 -- # uname 00:30:50.953 05:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:50.953 05:45:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2021168 00:30:50.953 05:45:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:50.953 05:45:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:50.953 05:45:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2021168' 00:30:50.953 killing process with pid 2021168 00:30:50.953 05:45:54 -- common/autotest_common.sh@955 -- # kill 2021168 00:30:50.953 Received shutdown signal, test time was about 2.000000 seconds 00:30:50.953 00:30:50.953 Latency(us) 00:30:50.953 [2024-12-07T04:45:54.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.953 [2024-12-07T04:45:54.193Z] =================================================================================================================== 00:30:50.953 [2024-12-07T04:45:54.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:50.953 05:45:54 -- common/autotest_common.sh@960 -- # wait 2021168 00:30:50.953 05:45:54 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:50.953 05:45:54 -- host/digest.sh@54 -- # local rw bs qd 00:30:50.953 05:45:54 -- host/digest.sh@56 -- # rw=randwrite 00:30:50.953 05:45:54 -- host/digest.sh@56 -- # bs=131072 00:30:50.953 05:45:54 -- host/digest.sh@56 -- # qd=16 00:30:50.953 05:45:54 -- host/digest.sh@58 -- # bperfpid=2021957 00:30:50.953 05:45:54 -- host/digest.sh@60 -- # waitforlisten 2021957 /var/tmp/bperf.sock 
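The pass/fail decision for the 4 KiB, queue-depth-128 randwrite run summarized above comes from the get_transient_errcount step traced in the digest.sh lines: it queries the bdevperf instance's per-bdev I/O statistics and pulls out how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR (175 here). A minimal sketch of that check, assuming the same socket path and bdev name as the trace; the helper body below is an approximation, not the script itself:

    # Approximation of digest.sh's transient-error count check (paths/names from the trace above).
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        # bdev_get_iostat exposes per-NVMe error-status counters here because these
        # runs enable bdev_nvme_set_options --nvme-error-stat (see the setup trace below).
        "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The digest test only passes if the injected CRC-32C corruption actually
    # produced transient transport errors; 175 were counted in the run above.
    (( $(get_transient_errcount nvme0n1) > 0 ))

The killprocess/kill lines that follow in the trace simply tear down that bdevperf instance (pid 2021168) before the next workload is configured.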
00:30:50.953 05:45:54 -- common/autotest_common.sh@829 -- # '[' -z 2021957 ']' 00:30:50.953 05:45:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:50.953 05:45:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:50.953 05:45:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.953 05:45:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:50.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:50.953 05:45:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.953 05:45:54 -- common/autotest_common.sh@10 -- # set +x 00:30:51.214 [2024-12-07 05:45:54.218062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:51.214 [2024-12-07 05:45:54.218120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021957 ] 00:30:51.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:51.214 Zero copy mechanism will not be used. 00:30:51.214 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.214 [2024-12-07 05:45:54.293346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.214 [2024-12-07 05:45:54.345052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.786 05:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.786 05:45:54 -- common/autotest_common.sh@862 -- # return 0 00:30:51.786 05:45:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:51.786 05:45:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:52.045 05:45:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:52.045 05:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.045 05:45:55 -- common/autotest_common.sh@10 -- # set +x 00:30:52.045 05:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.045 05:45:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.045 05:45:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.305 nvme0n1 00:30:52.305 05:45:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:52.305 05:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.305 05:45:55 -- common/autotest_common.sh@10 -- # set +x 00:30:52.305 05:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.305 05:45:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:52.305 05:45:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:52.566 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:52.566 Zero copy mechanism will not be used. 
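The setup just traced for the 131072-byte, queue-depth-16 randwrite pass follows the digest-error pattern used throughout this test: start bdevperf in wait-for-RPC mode, enable NVMe error statistics, attach the TCP controller with data digest enabled, arm CRC-32C corruption in the accel layer, and then drive the workload. A rough sketch of that sequence using the same paths and addresses as the trace; bperf_rpc and rpc_cmd are simplified stand-ins for the autotest helpers, and rpc_cmd is assumed to talk to the target application's default RPC socket:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf RPC socket
    rpc_cmd()   { "$SPDK_ROOT/scripts/rpc.py" "$@"; }                          # target RPC (default socket, assumed)

    # Start bdevperf idle (-z) on core mask 0x2 with the 128 KiB randwrite, qd=16 workload.
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Keep per-bdev NVMe error-status counters and retry failed commands indefinitely,
    # then attach the TCP target with data digest (--ddgst) verification enabled.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable      # crc32c injection off while attaching, as traced
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm CRC-32C corruption (arguments exactly as traced) so data-digest checks fail
    # and writes complete with TRANSIENT TRANSPORT ERROR, as in the records that follow.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the configured 2-second I/O run.
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The "Data digest error" and TRANSIENT TRANSPORT ERROR records below are therefore expected output for this run, and the same iostat-based error count decides whether it passes.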
00:30:52.566 Running I/O for 2 seconds...
00:30:52.566 [2024-12-07 05:45:55.627707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90
00:30:52.566 [2024-12-07 05:45:55.627970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:52.566 [2024-12-07 05:45:55.627996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:52.566 [2024-12-07 05:45:55.638171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90
00:30:52.566 [2024-12-07 05:45:55.638444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:52.566 [2024-12-07 05:45:55.638463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:52.566 [2024-12-07 05:45:55.648348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90
00:30:52.566 [2024-12-07 05:45:55.648634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:52.566 [2024-12-07 05:45:55.648650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90, WRITE command notice, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining writes on qid:1 from 05:45:55.657 through 05:45:56.405 ...]
00:30:53.357 [2024-12-07 05:45:56.405163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90
00:30:53.357 [2024-12-07 05:45:56.405225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.405243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.408925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.409091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.409106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.412686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.412775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.412790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.418151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.418430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.426784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.427030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.427046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.436370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.436650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.436665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.446325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.446538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.446554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.455514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.455672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.455687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.463478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.463811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.463827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.471736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.471858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.471873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.478906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.357 [2024-12-07 05:45:56.479180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.357 [2024-12-07 05:45:56.479196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.357 [2024-12-07 05:45:56.487448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.487738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.487753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.496131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.496459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.496476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.504258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.504425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.504440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.513464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.513528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.513543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.522761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.522973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.522988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.531316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.531546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.531562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.540597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.540889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.540905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.550547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.550814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.550828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.559036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.559276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.559290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.569067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.569123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.569138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.575470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 
05:45:56.575721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.575735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.582823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.583131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.583154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.358 [2024-12-07 05:45:56.589771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.358 [2024-12-07 05:45:56.589820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.358 [2024-12-07 05:45:56.589836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.619 [2024-12-07 05:45:56.596092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.619 [2024-12-07 05:45:56.596349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.619 [2024-12-07 05:45:56.596364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.619 [2024-12-07 05:45:56.604780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.619 [2024-12-07 05:45:56.604838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.619 [2024-12-07 05:45:56.604853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.619 [2024-12-07 05:45:56.612800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.619 [2024-12-07 05:45:56.612866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.619 [2024-12-07 05:45:56.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.619 [2024-12-07 05:45:56.621510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.619 [2024-12-07 05:45:56.621774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.619 [2024-12-07 05:45:56.621789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.619 [2024-12-07 05:45:56.630560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 
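(Note on the repeated messages above: for each injected fault the SPDK initiator logs two things — tcp.c:data_crc32_calc_done reporting a data digest mismatch on the TCP qpair, and nvme_qpair.c printing the affected WRITE command together with its completion, where "(00/22)" is the (status code type / status code) pair, i.e. a generic "Command Transient Transport Error". The data digest (DDGST) that fails here is the CRC-32C checksum NVMe/TCP carries after the PDU data. The sketch below is a minimal, table-free CRC-32C reference in C for orientation only; the function name, seeding/finalization convention, and self-test are illustrative assumptions and are not SPDK's implementation.)

/* Illustrative CRC-32C (Castagnoli) sketch, reflected polynomial 0x82F63B78.
 * A "Data digest error" means the DDGST received in the PDU did not match
 * the value recomputed over the payload with this checksum family.
 * crc32c_sw and the self-test below are hypothetical, not SPDK code. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint32_t crc32c_sw(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;                 /* conventional initial value */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)       /* bit-at-a-time, no table */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                   /* conventional final XOR */
}

int main(void)
{
    /* Standard CRC-32C check value for the ASCII string "123456789". */
    assert(crc32c_sw("123456789", strlen("123456789")) == 0xE3069283u);
    return 0;
}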
00:30:53.619 [2024-12-07 05:45:56.630833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.630847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.640180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.640365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.640380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.648993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.649256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.649271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.657555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.657835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.657851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.666290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.666423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.666438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.673371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.673612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.673628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.681939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.682064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.682079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.689539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with 
pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.689828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.689844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.699625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.699712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.699727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.709542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.709794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.720559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.720718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.720733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.730737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.731002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.731022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.740974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.741057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.741071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.751592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.751901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.751917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.761440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.761795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.761812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.771986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.772279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.772296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.782726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.782979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.782995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.793533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.793750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.803998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.804320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.804335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.814780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.814991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.815006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.823698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.823815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.823830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.833346] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.833408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.833424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.843136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.843365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.843380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.620 [2024-12-07 05:45:56.852529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.620 [2024-12-07 05:45:56.852876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.620 [2024-12-07 05:45:56.852891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.862565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.862792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.862812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.873023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.873280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.873295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.883483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.883676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.883691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.894250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.894390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.894406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.904328] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.904565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.914760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.914995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.915016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.925213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.925495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.925511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.935482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.935774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.935796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.945903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.946153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.946169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.956499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.956755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.956770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.966646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.966757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.966772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.882 
[2024-12-07 05:45:56.976485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.976564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.976579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.987044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.987284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.987299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:56.998465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:56.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:56.998711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:57.008686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:57.008989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:57.009006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:57.018682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:57.018958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:57.018975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:57.029447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:57.029770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:57.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:57.040452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:57.040705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:57.040721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:53.882 [2024-12-07 05:45:57.050778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.882 [2024-12-07 05:45:57.051103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.882 [2024-12-07 05:45:57.051118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.882 [2024-12-07 05:45:57.061758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.062044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.062059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.883 [2024-12-07 05:45:57.071896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.072146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.072162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:53.883 [2024-12-07 05:45:57.082539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.082786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.082801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.883 [2024-12-07 05:45:57.092651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.092925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.092940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:53.883 [2024-12-07 05:45:57.102613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.102911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.102928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:53.883 [2024-12-07 05:45:57.113482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:53.883 [2024-12-07 05:45:57.113739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.883 [2024-12-07 05:45:57.113754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.145 [2024-12-07 05:45:57.123759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.145 [2024-12-07 05:45:57.124046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.145 [2024-12-07 05:45:57.124061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.145 [2024-12-07 05:45:57.134614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.145 [2024-12-07 05:45:57.134857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.145 [2024-12-07 05:45:57.134875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.144662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.144892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.144907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.155190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.155424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.155439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.165558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.165820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.175761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.176026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.176041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.186201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.186464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.186479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.196808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.197161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.197177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.206686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.206964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.206980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.217542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.217809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.217825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.227807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.228065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.228084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.238346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.238567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.248890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.249125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.249140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.259262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.259527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.259551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.269312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.269589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.269604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.279129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.279407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.279422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.289679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.289859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.289874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.299310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.299780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.299796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.310055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.310431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.310447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.320108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.320340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.320355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.330916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.331241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.331257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.341380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.341716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.341731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.351504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.351788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.351803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.361596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.361820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.361834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.146 [2024-12-07 05:45:57.372376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.146 [2024-12-07 05:45:57.372624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.146 [2024-12-07 05:45:57.372639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.382586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.382816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.382831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.392970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.393241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.393256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.403490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.403789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.403808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.413694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.413957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.413972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.424597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.424890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.424905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.435321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.435534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.435550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.445336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.445630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.445645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.455319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.455579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.455594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.466120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.466347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.466362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.476798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.477047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 
[2024-12-07 05:45:57.477063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.409 [2024-12-07 05:45:57.487464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.409 [2024-12-07 05:45:57.487691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.409 [2024-12-07 05:45:57.487706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.498463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.498584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.498600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.508747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.509037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.509053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.519145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.519522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.519538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.529885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.530097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.530112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.541953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.542165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.542180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.552170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.552385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.552399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.562032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.562245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.562260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.572203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.572481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.572502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.579784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.579847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.579863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.587406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.587650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.587666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.594590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.594794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.594809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.603128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.603368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.603384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:54.410 [2024-12-07 05:45:57.611079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa69c40) with pdu=0x2000190fef90 00:30:54.410 [2024-12-07 05:45:57.611139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.410 [2024-12-07 05:45:57.611154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.410 00:30:54.410 Latency(us) 00:30:54.410 [2024-12-07T04:45:57.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.410 [2024-12-07T04:45:57.650Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:54.410 nvme0n1 : 2.00 4095.76 511.97 0.00 0.00 3900.70 1433.60 14854.83 00:30:54.410 [2024-12-07T04:45:57.650Z] =================================================================================================================== 00:30:54.410 [2024-12-07T04:45:57.650Z] Total : 4095.76 511.97 0.00 0.00 3900.70 1433.60 14854.83 00:30:54.410 0 00:30:54.410 05:45:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:54.410 05:45:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:54.410 05:45:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:54.410 | .driver_specific 00:30:54.410 | .nvme_error 00:30:54.410 | .status_code 00:30:54.410 | .command_transient_transport_error' 00:30:54.410 05:45:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:54.671 05:45:57 -- host/digest.sh@71 -- # (( 264 > 0 )) 00:30:54.671 05:45:57 -- host/digest.sh@73 -- # killprocess 2021957 00:30:54.671 05:45:57 -- common/autotest_common.sh@936 -- # '[' -z 2021957 ']' 00:30:54.671 05:45:57 -- common/autotest_common.sh@940 -- # kill -0 2021957 00:30:54.671 05:45:57 -- common/autotest_common.sh@941 -- # uname 00:30:54.671 05:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:54.671 05:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2021957 00:30:54.671 05:45:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:54.671 05:45:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:54.671 05:45:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2021957' 00:30:54.671 killing process with pid 2021957 00:30:54.671 05:45:57 -- common/autotest_common.sh@955 -- # kill 2021957 00:30:54.671 Received shutdown signal, test time was about 2.000000 seconds 00:30:54.671 00:30:54.671 Latency(us) 00:30:54.671 [2024-12-07T04:45:57.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.671 [2024-12-07T04:45:57.911Z] =================================================================================================================== 00:30:54.671 [2024-12-07T04:45:57.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:54.671 05:45:57 -- common/autotest_common.sh@960 -- # wait 2021957 00:30:54.933 05:45:57 -- host/digest.sh@115 -- # killprocess 2019530 00:30:54.933 05:45:57 -- common/autotest_common.sh@936 -- # '[' -z 2019530 ']' 00:30:54.933 05:45:57 -- common/autotest_common.sh@940 -- # kill -0 2019530 00:30:54.933 05:45:57 -- common/autotest_common.sh@941 -- # uname 00:30:54.933 05:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:54.933 05:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2019530 00:30:54.933 05:45:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:54.933 05:45:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:54.933 05:45:58 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 2019530' 00:30:54.933 killing process with pid 2019530 00:30:54.933 05:45:58 -- common/autotest_common.sh@955 -- # kill 2019530 00:30:54.933 05:45:58 -- common/autotest_common.sh@960 -- # wait 2019530 00:30:55.193 00:30:55.193 real 0m16.374s 00:30:55.193 user 0m32.111s 00:30:55.193 sys 0m3.571s 00:30:55.193 05:45:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:55.193 05:45:58 -- common/autotest_common.sh@10 -- # set +x 00:30:55.193 ************************************ 00:30:55.193 END TEST nvmf_digest_error 00:30:55.193 ************************************ 00:30:55.193 05:45:58 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:55.193 05:45:58 -- host/digest.sh@139 -- # nvmftestfini 00:30:55.193 05:45:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:55.193 05:45:58 -- nvmf/common.sh@116 -- # sync 00:30:55.193 05:45:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:55.193 05:45:58 -- nvmf/common.sh@119 -- # set +e 00:30:55.193 05:45:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:55.193 05:45:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:55.193 rmmod nvme_tcp 00:30:55.193 rmmod nvme_fabrics 00:30:55.193 rmmod nvme_keyring 00:30:55.193 05:45:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:55.193 05:45:58 -- nvmf/common.sh@123 -- # set -e 00:30:55.193 05:45:58 -- nvmf/common.sh@124 -- # return 0 00:30:55.193 05:45:58 -- nvmf/common.sh@477 -- # '[' -n 2019530 ']' 00:30:55.193 05:45:58 -- nvmf/common.sh@478 -- # killprocess 2019530 00:30:55.193 05:45:58 -- common/autotest_common.sh@936 -- # '[' -z 2019530 ']' 00:30:55.193 05:45:58 -- common/autotest_common.sh@940 -- # kill -0 2019530 00:30:55.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2019530) - No such process 00:30:55.193 05:45:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2019530 is not found' 00:30:55.193 Process with pid 2019530 is not found 00:30:55.193 05:45:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:55.193 05:45:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:55.193 05:45:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:55.193 05:45:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:55.193 05:45:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:55.193 05:45:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.193 05:45:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.193 05:45:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.739 05:46:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:57.739 00:30:57.739 real 0m42.758s 00:30:57.739 user 1m6.316s 00:30:57.739 sys 0m13.038s 00:30:57.739 05:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:57.739 05:46:00 -- common/autotest_common.sh@10 -- # set +x 00:30:57.739 ************************************ 00:30:57.739 END TEST nvmf_digest 00:30:57.739 ************************************ 00:30:57.739 05:46:00 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:30:57.739 05:46:00 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:30:57.739 05:46:00 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:30:57.739 05:46:00 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:57.739 05:46:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 
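The long run of NOTICE completions above is the data-digest error-injection pass of the digest test: tcp.c's data_crc32_calc_done() reports a CRC32C mismatch on each received data PDU, and the affected WRITE completes with NVMe status Command Transient Transport Error (status code type 0x0, status code 0x22, printed as "00/22"). host/digest.sh then reads the per-bdev NVMe error counters over bdevperf's RPC socket and requires the transient count (264 in this run) to be non-zero before tearing everything down. A minimal stand-alone version of that check, assuming a bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 as in the trace:

# Read I/O statistics for bdev nvme0n1 and pull out the count of completions
# that finished with COMMAND TRANSIENT TRANSPORT ERROR (the value the test
# compares against 0 above).
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'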
00:30:57.739 05:46:00 -- common/autotest_common.sh@10 -- # set +x 00:30:57.739 ************************************ 00:30:57.739 START TEST nvmf_bdevperf 00:30:57.739 ************************************ 00:30:57.739 05:46:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:57.739 * Looking for test storage... 00:30:57.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.739 05:46:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:57.739 05:46:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:57.739 05:46:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:57.739 05:46:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:57.739 05:46:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:57.739 05:46:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:57.739 05:46:00 -- scripts/common.sh@335 -- # IFS=.-: 00:30:57.739 05:46:00 -- scripts/common.sh@335 -- # read -ra ver1 00:30:57.739 05:46:00 -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.739 05:46:00 -- scripts/common.sh@336 -- # read -ra ver2 00:30:57.739 05:46:00 -- scripts/common.sh@337 -- # local 'op=<' 00:30:57.739 05:46:00 -- scripts/common.sh@339 -- # ver1_l=2 00:30:57.739 05:46:00 -- scripts/common.sh@340 -- # ver2_l=1 00:30:57.739 05:46:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:57.739 05:46:00 -- scripts/common.sh@343 -- # case "$op" in 00:30:57.739 05:46:00 -- scripts/common.sh@344 -- # : 1 00:30:57.739 05:46:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:57.739 05:46:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.739 05:46:00 -- scripts/common.sh@364 -- # decimal 1 00:30:57.739 05:46:00 -- scripts/common.sh@352 -- # local d=1 00:30:57.739 05:46:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.739 05:46:00 -- scripts/common.sh@354 -- # echo 1 00:30:57.739 05:46:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:57.739 05:46:00 -- scripts/common.sh@365 -- # decimal 2 00:30:57.739 05:46:00 -- scripts/common.sh@352 -- # local d=2 00:30:57.739 05:46:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.739 05:46:00 -- scripts/common.sh@354 -- # echo 2 00:30:57.739 05:46:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:57.739 05:46:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:57.739 05:46:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:57.739 05:46:00 -- scripts/common.sh@367 -- # return 0 00:30:57.739 05:46:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:57.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.739 --rc genhtml_branch_coverage=1 00:30:57.739 --rc genhtml_function_coverage=1 00:30:57.739 --rc genhtml_legend=1 00:30:57.739 --rc geninfo_all_blocks=1 00:30:57.739 --rc geninfo_unexecuted_blocks=1 00:30:57.739 00:30:57.739 ' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:57.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.739 --rc genhtml_branch_coverage=1 00:30:57.739 --rc genhtml_function_coverage=1 00:30:57.739 --rc genhtml_legend=1 00:30:57.739 --rc geninfo_all_blocks=1 00:30:57.739 --rc geninfo_unexecuted_blocks=1 00:30:57.739 00:30:57.739 ' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:57.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.739 --rc genhtml_branch_coverage=1 00:30:57.739 --rc genhtml_function_coverage=1 00:30:57.739 --rc genhtml_legend=1 00:30:57.739 --rc geninfo_all_blocks=1 00:30:57.739 --rc geninfo_unexecuted_blocks=1 00:30:57.739 00:30:57.739 ' 00:30:57.739 05:46:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:57.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.739 --rc genhtml_branch_coverage=1 00:30:57.739 --rc genhtml_function_coverage=1 00:30:57.740 --rc genhtml_legend=1 00:30:57.740 --rc geninfo_all_blocks=1 00:30:57.740 --rc geninfo_unexecuted_blocks=1 00:30:57.740 00:30:57.740 ' 00:30:57.740 05:46:00 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.740 05:46:00 -- nvmf/common.sh@7 -- # uname -s 00:30:57.740 05:46:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.740 05:46:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.740 05:46:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.740 05:46:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.740 05:46:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.740 05:46:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.740 05:46:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.740 05:46:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.740 05:46:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.740 05:46:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.740 05:46:00 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.740 05:46:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.740 05:46:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.740 05:46:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.740 05:46:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.740 05:46:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.740 05:46:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.740 05:46:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.740 05:46:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.740 05:46:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.740 05:46:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.740 05:46:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.740 05:46:00 -- paths/export.sh@5 -- # export PATH 00:30:57.740 05:46:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.740 05:46:00 -- nvmf/common.sh@46 -- # : 0 00:30:57.740 05:46:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:57.740 05:46:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:57.740 05:46:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:57.740 05:46:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.740 05:46:00 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.740 05:46:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:57.740 05:46:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:57.740 05:46:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:57.740 05:46:00 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:57.740 05:46:00 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:57.740 05:46:00 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:57.740 05:46:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:57.740 05:46:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.740 05:46:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:57.740 05:46:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:57.740 05:46:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:57.740 05:46:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.740 05:46:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.740 05:46:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.740 05:46:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:57.740 05:46:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:57.740 05:46:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:57.740 05:46:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.091 05:46:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:06.091 05:46:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:06.091 05:46:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:06.091 05:46:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:06.091 05:46:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:06.091 05:46:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:06.091 05:46:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:06.091 05:46:07 -- nvmf/common.sh@294 -- # net_devs=() 00:31:06.091 05:46:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:06.091 05:46:07 -- nvmf/common.sh@295 -- # e810=() 00:31:06.091 05:46:07 -- nvmf/common.sh@295 -- # local -ga e810 00:31:06.091 05:46:07 -- nvmf/common.sh@296 -- # x722=() 00:31:06.091 05:46:07 -- nvmf/common.sh@296 -- # local -ga x722 00:31:06.091 05:46:07 -- nvmf/common.sh@297 -- # mlx=() 00:31:06.091 05:46:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:06.091 05:46:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.091 05:46:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:06.091 05:46:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:06.091 05:46:07 
-- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:06.091 05:46:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:06.091 05:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:06.091 05:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:06.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:06.091 05:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:06.091 05:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:06.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:06.091 05:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:06.091 05:46:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:06.091 05:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:06.091 05:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.091 05:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:06.091 05:46:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.091 05:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:06.091 Found net devices under 0000:31:00.0: cvl_0_0 00:31:06.091 05:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.091 05:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:06.092 05:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.092 05:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:06.092 05:46:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.092 05:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:06.092 Found net devices under 0000:31:00.1: cvl_0_1 00:31:06.092 05:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.092 05:46:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:06.092 05:46:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:06.092 05:46:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:06.092 05:46:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:06.092 05:46:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:06.092 05:46:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.092 05:46:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.092 05:46:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.092 05:46:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:06.092 05:46:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.092 05:46:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.092 05:46:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:06.092 05:46:07 
-- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.092 05:46:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.092 05:46:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:06.092 05:46:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:06.092 05:46:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.092 05:46:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.092 05:46:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.092 05:46:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.092 05:46:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:06.092 05:46:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.092 05:46:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.092 05:46:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.092 05:46:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:06.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:31:06.092 00:31:06.092 --- 10.0.0.2 ping statistics --- 00:31:06.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.092 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:31:06.092 05:46:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:31:06.092 00:31:06.092 --- 10.0.0.1 ping statistics --- 00:31:06.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.092 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:31:06.092 05:46:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.092 05:46:08 -- nvmf/common.sh@410 -- # return 0 00:31:06.092 05:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:06.092 05:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.092 05:46:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:06.092 05:46:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:06.092 05:46:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.092 05:46:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:06.092 05:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:06.092 05:46:08 -- host/bdevperf.sh@25 -- # tgt_init 00:31:06.092 05:46:08 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:06.092 05:46:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:06.092 05:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:06.092 05:46:08 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 05:46:08 -- nvmf/common.sh@469 -- # nvmfpid=2026855 00:31:06.092 05:46:08 -- nvmf/common.sh@470 -- # waitforlisten 2026855 00:31:06.092 05:46:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:06.092 05:46:08 -- common/autotest_common.sh@829 -- # '[' -z 2026855 ']' 00:31:06.092 05:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.092 05:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:06.092 05:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.092 05:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:06.092 05:46:08 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 [2024-12-07 05:46:08.201591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:06.092 [2024-12-07 05:46:08.201657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.092 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.092 [2024-12-07 05:46:08.291392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.092 [2024-12-07 05:46:08.384487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:06.092 [2024-12-07 05:46:08.384654] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.092 [2024-12-07 05:46:08.384666] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.092 [2024-12-07 05:46:08.384675] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.092 [2024-12-07 05:46:08.384854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.092 [2024-12-07 05:46:08.385043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.092 [2024-12-07 05:46:08.385051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.092 05:46:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:06.092 05:46:08 -- common/autotest_common.sh@862 -- # return 0 00:31:06.092 05:46:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:06.092 05:46:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:06.092 05:46:08 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 05:46:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.092 05:46:09 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.092 05:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.092 05:46:09 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 [2024-12-07 05:46:09.035284] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.092 05:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.092 05:46:09 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.092 05:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.092 05:46:09 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 Malloc0 00:31:06.092 05:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.092 05:46:09 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.092 05:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.092 05:46:09 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 05:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.092 05:46:09 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.092 05:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.092 05:46:09 -- common/autotest_common.sh@10 -- # set +x 
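Condensing the nvmf_tcp_init and nvmfappstart traces above into a stand-alone sketch: the target-side port (cvl_0_0 on this machine) is moved into its own network namespace, both ends get an address on 10.0.0.0/24, reachability is verified with ping, and nvmf_tgt is started inside the namespace with core mask 0xE (cores 1-3, matching the reactor messages above). Interface names and flags are the ones detected in this run; the readiness loop is only a simple stand-in for waitforlisten.

# Isolate the target port in a namespace and address both ends (from nvmf_tcp_init).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator-to-target reachability check

# Start the NVMe-oF target inside the namespace (paths relative to the SPDK
# repo root) and wait until it answers on its default RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done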
00:31:06.092 05:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.092 05:46:09 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.092 05:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.092 05:46:09 -- common/autotest_common.sh@10 -- # set +x 00:31:06.092 [2024-12-07 05:46:09.098516] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.092 05:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.092 05:46:09 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:06.092 05:46:09 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:06.092 05:46:09 -- nvmf/common.sh@520 -- # config=() 00:31:06.092 05:46:09 -- nvmf/common.sh@520 -- # local subsystem config 00:31:06.092 05:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.092 05:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.092 { 00:31:06.092 "params": { 00:31:06.092 "name": "Nvme$subsystem", 00:31:06.092 "trtype": "$TEST_TRANSPORT", 00:31:06.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.092 "adrfam": "ipv4", 00:31:06.092 "trsvcid": "$NVMF_PORT", 00:31:06.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.092 "hdgst": ${hdgst:-false}, 00:31:06.092 "ddgst": ${ddgst:-false} 00:31:06.092 }, 00:31:06.092 "method": "bdev_nvme_attach_controller" 00:31:06.092 } 00:31:06.092 EOF 00:31:06.092 )") 00:31:06.092 05:46:09 -- nvmf/common.sh@542 -- # cat 00:31:06.092 05:46:09 -- nvmf/common.sh@544 -- # jq . 00:31:06.092 05:46:09 -- nvmf/common.sh@545 -- # IFS=, 00:31:06.092 05:46:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:06.092 "params": { 00:31:06.092 "name": "Nvme1", 00:31:06.092 "trtype": "tcp", 00:31:06.092 "traddr": "10.0.0.2", 00:31:06.092 "adrfam": "ipv4", 00:31:06.092 "trsvcid": "4420", 00:31:06.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.092 "hdgst": false, 00:31:06.092 "ddgst": false 00:31:06.092 }, 00:31:06.092 "method": "bdev_nvme_attach_controller" 00:31:06.092 }' 00:31:06.092 [2024-12-07 05:46:09.141629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:06.092 [2024-12-07 05:46:09.141675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027107 ] 00:31:06.092 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.092 [2024-12-07 05:46:09.201578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.092 [2024-12-07 05:46:09.264599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.663 Running I/O for 1 seconds... 
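The target is then configured entirely over RPC before the first bdevperf pass: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem that exports the bdev as namespace 1, and a TCP listener on 10.0.0.2:4420. rpc_cmd is SPDK's test wrapper around scripts/rpc.py, so the same sequence issued by hand should look roughly like this (names and arguments copied from the trace):

# Target-side configuration, equivalent to the rpc_cmd calls in bdevperf.sh.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, bdevperf is fed the JSON printed above, which attaches an NVMe-oF controller (bdev_nvme_attach_controller) to 10.0.0.2:4420 with hostnqn nqn.2016-06.io.spdk:host1 and header/data digests disabled, and then drives the 1-second verify workload at queue depth 128 and 4096-byte I/O size whose results follow.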
00:31:07.604 00:31:07.604 Latency(us) 00:31:07.604 [2024-12-07T04:46:10.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.604 [2024-12-07T04:46:10.844Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:07.604 Verification LBA range: start 0x0 length 0x4000 00:31:07.605 Nvme1n1 : 1.01 13943.82 54.47 0.00 0.00 9135.62 1576.96 12670.29 00:31:07.605 [2024-12-07T04:46:10.845Z] =================================================================================================================== 00:31:07.605 [2024-12-07T04:46:10.845Z] Total : 13943.82 54.47 0.00 0.00 9135.62 1576.96 12670.29 00:31:07.605 05:46:10 -- host/bdevperf.sh@30 -- # bdevperfpid=2027440 00:31:07.605 05:46:10 -- host/bdevperf.sh@32 -- # sleep 3 00:31:07.605 05:46:10 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:07.605 05:46:10 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:07.605 05:46:10 -- nvmf/common.sh@520 -- # config=() 00:31:07.605 05:46:10 -- nvmf/common.sh@520 -- # local subsystem config 00:31:07.605 05:46:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:07.605 05:46:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:07.605 { 00:31:07.605 "params": { 00:31:07.605 "name": "Nvme$subsystem", 00:31:07.605 "trtype": "$TEST_TRANSPORT", 00:31:07.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.605 "adrfam": "ipv4", 00:31:07.605 "trsvcid": "$NVMF_PORT", 00:31:07.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.605 "hdgst": ${hdgst:-false}, 00:31:07.605 "ddgst": ${ddgst:-false} 00:31:07.605 }, 00:31:07.605 "method": "bdev_nvme_attach_controller" 00:31:07.605 } 00:31:07.605 EOF 00:31:07.605 )") 00:31:07.605 05:46:10 -- nvmf/common.sh@542 -- # cat 00:31:07.605 05:46:10 -- nvmf/common.sh@544 -- # jq . 00:31:07.605 05:46:10 -- nvmf/common.sh@545 -- # IFS=, 00:31:07.605 05:46:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:07.605 "params": { 00:31:07.605 "name": "Nvme1", 00:31:07.605 "trtype": "tcp", 00:31:07.605 "traddr": "10.0.0.2", 00:31:07.605 "adrfam": "ipv4", 00:31:07.605 "trsvcid": "4420", 00:31:07.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.605 "hdgst": false, 00:31:07.605 "ddgst": false 00:31:07.605 }, 00:31:07.605 "method": "bdev_nvme_attach_controller" 00:31:07.605 }' 00:31:07.605 [2024-12-07 05:46:10.789897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:07.605 [2024-12-07 05:46:10.789951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027440 ] 00:31:07.605 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.903 [2024-12-07 05:46:10.850921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.903 [2024-12-07 05:46:10.911824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.903 Running I/O for 15 seconds... 
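For the second pass the script launches bdevperf with -t 15 -f, lets I/O run for a few seconds, and then hard-kills the nvmf target (pid 2026855 above); once the TCP connection drops, the initiator completes every outstanding READ/WRITE as ABORTED - SQ DELETION, which is what the flood of completions below shows. A condensed sketch of that sequence, with a hypothetical bperf.json file standing in for the /dev/fd/63 process substitution the script actually uses:

# Second bdevperf pass: 15-second verify run, then kill the target mid-run.
./build/examples/bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3                 # let I/O get going
kill -9 "$nvmfpid"      # 2026855 in this run; every queue pair is torn down
sleep 3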
00:31:11.205 05:46:13 -- host/bdevperf.sh@33 -- # kill -9 2026855 00:31:11.205 05:46:13 -- host/bdevperf.sh@35 -- # sleep 3 00:31:11.205 [2024-12-07 05:46:13.758674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758907] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.758981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.758992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:11.205 [2024-12-07 05:46:13.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.205 [2024-12-07 05:46:13.759302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.205 [2024-12-07 05:46:13.759312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.206 [2024-12-07 05:46:13.759947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.206 [2024-12-07 05:46:13.759964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.206 [2024-12-07 05:46:13.759974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.759981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.759991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.759998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 
05:46:13.760245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.207 [2024-12-07 05:46:13.760689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.207 [2024-12-07 05:46:13.760714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.207 [2024-12-07 05:46:13.760721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.208 [2024-12-07 05:46:13.760907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 
05:46:13.760933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.760983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.760991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.761000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.761007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.761022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.208 [2024-12-07 05:46:13.761030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.761038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12766f0 is same with the state(5) to be set 00:31:11.208 [2024-12-07 05:46:13.761047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.208 [2024-12-07 05:46:13.761054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.208 [2024-12-07 05:46:13.761061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32736 len:8 PRP1 0x0 PRP2 0x0 00:31:11.208 [2024-12-07 05:46:13.761069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.208 [2024-12-07 05:46:13.761111] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12766f0 was disconnected and freed. reset controller. 
00:31:11.208 [2024-12-07 05:46:13.763412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.208 [2024-12-07 05:46:13.763461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.208 [2024-12-07 05:46:13.764230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.764569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.764582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.208 [2024-12-07 05:46:13.764592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.208 [2024-12-07 05:46:13.764758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.208 [2024-12-07 05:46:13.764869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.208 [2024-12-07 05:46:13.764877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.208 [2024-12-07 05:46:13.764886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.208 [2024-12-07 05:46:13.766966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.208 [2024-12-07 05:46:13.775975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.208 [2024-12-07 05:46:13.776528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.776818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.776832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.208 [2024-12-07 05:46:13.776842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.208 [2024-12-07 05:46:13.776987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.208 [2024-12-07 05:46:13.777142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.208 [2024-12-07 05:46:13.777152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.208 [2024-12-07 05:46:13.777160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.208 [2024-12-07 05:46:13.779403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.208 [2024-12-07 05:46:13.788520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.208 [2024-12-07 05:46:13.789022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.789407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.789445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.208 [2024-12-07 05:46:13.789457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.208 [2024-12-07 05:46:13.789661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.208 [2024-12-07 05:46:13.789863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.208 [2024-12-07 05:46:13.789872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.208 [2024-12-07 05:46:13.789880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.208 [2024-12-07 05:46:13.792181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.208 [2024-12-07 05:46:13.801033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.208 [2024-12-07 05:46:13.801550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.801854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.801864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.208 [2024-12-07 05:46:13.801872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.208 [2024-12-07 05:46:13.801979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.208 [2024-12-07 05:46:13.802128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.208 [2024-12-07 05:46:13.802136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.208 [2024-12-07 05:46:13.802144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.208 [2024-12-07 05:46:13.804340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.208 [2024-12-07 05:46:13.813495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.208 [2024-12-07 05:46:13.813943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.208 [2024-12-07 05:46:13.814297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.814308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.814316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.814479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.814640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.814647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.814655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.816875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.209 [2024-12-07 05:46:13.825971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.826535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.826750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.826765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.826775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.826938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.827095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.827106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.827113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.829497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.209 [2024-12-07 05:46:13.838435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.838882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.839188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.839200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.839208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.839352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.839514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.839521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.839529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.841750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.209 [2024-12-07 05:46:13.851059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.851519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.851693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.851705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.851713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.851894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.852043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.852053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.852060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.854147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.209 [2024-12-07 05:46:13.863361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.863838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.864025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.864036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.864047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.864209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.864352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.864360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.864367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.866771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.209 [2024-12-07 05:46:13.875863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.876364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.876684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.876699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.876709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.876909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.877047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.877057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.877065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.879429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.209 [2024-12-07 05:46:13.888290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.888766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.889002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.889023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.889033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.889215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.889361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.889370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.889377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.891527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.209 [2024-12-07 05:46:13.900804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.209 [2024-12-07 05:46:13.901268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.901611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.209 [2024-12-07 05:46:13.901621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.209 [2024-12-07 05:46:13.901629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.209 [2024-12-07 05:46:13.901778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.209 [2024-12-07 05:46:13.901887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.209 [2024-12-07 05:46:13.901894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.209 [2024-12-07 05:46:13.901901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.209 [2024-12-07 05:46:13.904144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.209 [2024-12-07 05:46:13.913320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.913766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.914065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.914076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.914084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.914245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.914406] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.914413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.914421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.916579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.210 [2024-12-07 05:46:13.925786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.926105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.926778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.926797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.926806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.926954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.927124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.927132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.927140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.929381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.210 [2024-12-07 05:46:13.938246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.938695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.939001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.939017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.939025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.939114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.939262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.939270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.939277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.941678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.210 [2024-12-07 05:46:13.950621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.951111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.951430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.951441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.951448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.951573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.951734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.951742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.951749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.954025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.210 [2024-12-07 05:46:13.963086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.963432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.963757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.963767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.963775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.963882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.964050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.964059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.964066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.966319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.210 [2024-12-07 05:46:13.975518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.976119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.976462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.976475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.976485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.976628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.976757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.976769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.976777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.978946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.210 [2024-12-07 05:46:13.987969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:13.988577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.988812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:13.988825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:13.988835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:13.989041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:13.989171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:13.989180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:13.989187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:13.991166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.210 [2024-12-07 05:46:14.000650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:14.001124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:14.001463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:14.001476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:14.001486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:14.001648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:14.001832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:14.001840] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:14.001848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:14.004094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.210 [2024-12-07 05:46:14.013099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:14.013664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:14.013900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.210 [2024-12-07 05:46:14.013913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.210 [2024-12-07 05:46:14.013923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.210 [2024-12-07 05:46:14.014114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.210 [2024-12-07 05:46:14.014225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.210 [2024-12-07 05:46:14.014234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.210 [2024-12-07 05:46:14.014246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.210 [2024-12-07 05:46:14.016776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.210 [2024-12-07 05:46:14.025546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.210 [2024-12-07 05:46:14.025997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.026331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.026342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.026350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.026494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.026618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.026626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.026633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.028813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.211 [2024-12-07 05:46:14.037966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.038315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.038519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.038529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.038536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.038661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.038823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.038832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.038839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.040928] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.211 [2024-12-07 05:46:14.050385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.050933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.051331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.051346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.051355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.051555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.051738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.051753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.051761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.054116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.211 [2024-12-07 05:46:14.062742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.063311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.063631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.063644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.063654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.063835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.064000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.064008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.064025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.066280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.211 [2024-12-07 05:46:14.075186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.075719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.076051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.076065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.076075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.076237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.076384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.076392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.076399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.078656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.211 [2024-12-07 05:46:14.087659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.088142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.088514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.088528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.088537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.088719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.088884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.088892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.088900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.091182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.211 [2024-12-07 05:46:14.100098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.100696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.101026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.101040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.101049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.101249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.101358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.101366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.101374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.103653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.211 [2024-12-07 05:46:14.112611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.113114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.113489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.113502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.113512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.113675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.113803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.113812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.113819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.115951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.211 [2024-12-07 05:46:14.124873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.125432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.125756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.125769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.125778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.211 [2024-12-07 05:46:14.125978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.211 [2024-12-07 05:46:14.126133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.211 [2024-12-07 05:46:14.126142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.211 [2024-12-07 05:46:14.126150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.211 [2024-12-07 05:46:14.128353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.211 [2024-12-07 05:46:14.137243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.211 [2024-12-07 05:46:14.137857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.138186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.211 [2024-12-07 05:46:14.138200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.211 [2024-12-07 05:46:14.138210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.138373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.138482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.138490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.138498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.140702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.212 [2024-12-07 05:46:14.149695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.150103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.150419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.150429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.150437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.150582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.150708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.150716] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.150723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.153016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.212 [2024-12-07 05:46:14.162280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.162814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.163129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.163144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.163153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.163352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.163498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.163506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.163514] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.165884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.212 [2024-12-07 05:46:14.174822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.175381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.175700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.175714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.175724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.175905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.176065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.176074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.176081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.178354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.212 [2024-12-07 05:46:14.187089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.187333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.187632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.187643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.187651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.187798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.187942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.187949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.187957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.190089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.212 [2024-12-07 05:46:14.199551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.200131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.200465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.200478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.200488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.200632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.200742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.200750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.200757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.203003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.212 [2024-12-07 05:46:14.212153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.212730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.212952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.212965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.212979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.213168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.213316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.213324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.213331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.215426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.212 [2024-12-07 05:46:14.224625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.225091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.225322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.225336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.225346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.225490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.225673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.225682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.225690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.227933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.212 [2024-12-07 05:46:14.237109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.237642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.237972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.237985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.237994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.238164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.238293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.238301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.212 [2024-12-07 05:46:14.238309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.212 [2024-12-07 05:46:14.240694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.212 [2024-12-07 05:46:14.249648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.212 [2024-12-07 05:46:14.250252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.250491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.212 [2024-12-07 05:46:14.250505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.212 [2024-12-07 05:46:14.250515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.212 [2024-12-07 05:46:14.250706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.212 [2024-12-07 05:46:14.250890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.212 [2024-12-07 05:46:14.250898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.250906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.253186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.213 [2024-12-07 05:46:14.262308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.262848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.263094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.263108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.263118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.263225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.263353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.263361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.263369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.265591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.213 [2024-12-07 05:46:14.274764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.275334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.275566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.275578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.275588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.275750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.275897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.275905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.275913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.278304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.213 [2024-12-07 05:46:14.287312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.287930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.288287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.288302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.288312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.288437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.288569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.288578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.288585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.290843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.213 [2024-12-07 05:46:14.299817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.300387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.300711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.300724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.300734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.300878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.301033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.301042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.301050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.303175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.213 [2024-12-07 05:46:14.312270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.312805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.313194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.313208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.313218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.313381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.313546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.313554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.313562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.315806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.213 [2024-12-07 05:46:14.324756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.325363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.325686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.325699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.325709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.325871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.326004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.326020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.326028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.328302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.213 [2024-12-07 05:46:14.337473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.338053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.338435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.338448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.338457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.338564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.338710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.338718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.338725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.341190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.213 [2024-12-07 05:46:14.350139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.350644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.351022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.351036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.351046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.351209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.351393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.351402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.351410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.353847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.213 [2024-12-07 05:46:14.362491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.213 [2024-12-07 05:46:14.363067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.363449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.213 [2024-12-07 05:46:14.363463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.213 [2024-12-07 05:46:14.363473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.213 [2024-12-07 05:46:14.363616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.213 [2024-12-07 05:46:14.363781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.213 [2024-12-07 05:46:14.363789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.213 [2024-12-07 05:46:14.363801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.213 [2024-12-07 05:46:14.365881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.214 [2024-12-07 05:46:14.374918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.214 [2024-12-07 05:46:14.375473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.375791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.375804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.214 [2024-12-07 05:46:14.375814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.214 [2024-12-07 05:46:14.375977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.214 [2024-12-07 05:46:14.376134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.214 [2024-12-07 05:46:14.376143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.214 [2024-12-07 05:46:14.376151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.214 [2024-12-07 05:46:14.378243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.214 [2024-12-07 05:46:14.387336] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.214 [2024-12-07 05:46:14.387836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.388120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.388131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.214 [2024-12-07 05:46:14.388140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.214 [2024-12-07 05:46:14.388284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.214 [2024-12-07 05:46:14.388483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.214 [2024-12-07 05:46:14.388490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.214 [2024-12-07 05:46:14.388497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.214 [2024-12-07 05:46:14.390782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.214 [2024-12-07 05:46:14.399846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.214 [2024-12-07 05:46:14.400366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.400765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.400778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.214 [2024-12-07 05:46:14.400788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.214 [2024-12-07 05:46:14.400970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.214 [2024-12-07 05:46:14.401104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.214 [2024-12-07 05:46:14.401113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.214 [2024-12-07 05:46:14.401126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.214 [2024-12-07 05:46:14.403623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.214 [2024-12-07 05:46:14.412402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.214 [2024-12-07 05:46:14.412894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.413093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.413104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.214 [2024-12-07 05:46:14.413112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.214 [2024-12-07 05:46:14.413200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.214 [2024-12-07 05:46:14.413288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.214 [2024-12-07 05:46:14.413296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.214 [2024-12-07 05:46:14.413303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.214 [2024-12-07 05:46:14.415446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.214 [2024-12-07 05:46:14.424898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.214 [2024-12-07 05:46:14.425424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.425744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.214 [2024-12-07 05:46:14.425758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.214 [2024-12-07 05:46:14.425767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.214 [2024-12-07 05:46:14.425948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.214 [2024-12-07 05:46:14.426102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.214 [2024-12-07 05:46:14.426111] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.214 [2024-12-07 05:46:14.426119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.214 [2024-12-07 05:46:14.428488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.478 [2024-12-07 05:46:14.437376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.478 [2024-12-07 05:46:14.437957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-12-07 05:46:14.438307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-12-07 05:46:14.438321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.478 [2024-12-07 05:46:14.438331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.478 [2024-12-07 05:46:14.438494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.478 [2024-12-07 05:46:14.438640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.478 [2024-12-07 05:46:14.438648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.478 [2024-12-07 05:46:14.438656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.478 [2024-12-07 05:46:14.440802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.478 [2024-12-07 05:46:14.450007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.478 [2024-12-07 05:46:14.450457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-12-07 05:46:14.450733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-12-07 05:46:14.450744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.478 [2024-12-07 05:46:14.450752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.478 [2024-12-07 05:46:14.450877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.478 [2024-12-07 05:46:14.451043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.478 [2024-12-07 05:46:14.451052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.478 [2024-12-07 05:46:14.451059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.478 [2024-12-07 05:46:14.453235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.478 [2024-12-07 05:46:14.462452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.478 [2024-12-07 05:46:14.462999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.463350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.463364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.463373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.463574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.463756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.463765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.463773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.466038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.479 [2024-12-07 05:46:14.474950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.475536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.475860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.475873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.475882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.476008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.476165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.476174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.476181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.478366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.479 [2024-12-07 05:46:14.487204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.487654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.487951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.487961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.487969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.488119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.488244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.488251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.488259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.490492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.479 [2024-12-07 05:46:14.499661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.500125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.500509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.500519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.500526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.500687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.500831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.500838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.500846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.503391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.479 [2024-12-07 05:46:14.512054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.512550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.512870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.512883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.512893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.513062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.513211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.513219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.513227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.515523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.479 [2024-12-07 05:46:14.524703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.525347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.525679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.525692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.525702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.525882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.526020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.526029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.526036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.528254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.479 [2024-12-07 05:46:14.537222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.537774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.538093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.538109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.538119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.538318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.538409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.538418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.538426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.540665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.479 [2024-12-07 05:46:14.549631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.550125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.550496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.550511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.550520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.550683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.550812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.550820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.550828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.553186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.479 [2024-12-07 05:46:14.562025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.562612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.562925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.562938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.562952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.479 [2024-12-07 05:46:14.563142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.479 [2024-12-07 05:46:14.563270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.479 [2024-12-07 05:46:14.563279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.479 [2024-12-07 05:46:14.563286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.479 [2024-12-07 05:46:14.565637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.479 [2024-12-07 05:46:14.574551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.479 [2024-12-07 05:46:14.575105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.575482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-12-07 05:46:14.575495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.479 [2024-12-07 05:46:14.575504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.575685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.575812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.575821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.575828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.578091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.480 [2024-12-07 05:46:14.587175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.587623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.587940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.587951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.587958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.588089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.588234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.588243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.588250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.590557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.480 [2024-12-07 05:46:14.599592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.600113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.600435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.600448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.600458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.600625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.600772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.600780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.600788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.603182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.480 [2024-12-07 05:46:14.611969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.612341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.612661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.612671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.612679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.612859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.612966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.612974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.612981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.615348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.480 [2024-12-07 05:46:14.624560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.624979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.625350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.625364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.625374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.625517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.625682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.625690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.625698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.627977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.480 [2024-12-07 05:46:14.636910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.637430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.637755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.637768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.637778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.637963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.638155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.638164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.638172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.640409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.480 [2024-12-07 05:46:14.649285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.649866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.650097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.650111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.650121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.650284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.650393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.650402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.650409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.652633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.480 [2024-12-07 05:46:14.661795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.662412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.662732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.662746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.662756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.662900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.663055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.663065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.663072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.665203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.480 [2024-12-07 05:46:14.674415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.674984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.675326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.675340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.675350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.675513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.675646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.480 [2024-12-07 05:46:14.675654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.480 [2024-12-07 05:46:14.675662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.480 [2024-12-07 05:46:14.678073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.480 [2024-12-07 05:46:14.686914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.480 [2024-12-07 05:46:14.687420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.687739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-12-07 05:46:14.687753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.480 [2024-12-07 05:46:14.687762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.480 [2024-12-07 05:46:14.687906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.480 [2024-12-07 05:46:14.688097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.481 [2024-12-07 05:46:14.688106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.481 [2024-12-07 05:46:14.688114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.481 [2024-12-07 05:46:14.690335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.481 [2024-12-07 05:46:14.699500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.481 [2024-12-07 05:46:14.700099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-12-07 05:46:14.700427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-12-07 05:46:14.700440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.481 [2024-12-07 05:46:14.700450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.481 [2024-12-07 05:46:14.700668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.481 [2024-12-07 05:46:14.700833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.481 [2024-12-07 05:46:14.700841] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.481 [2024-12-07 05:46:14.700850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.481 [2024-12-07 05:46:14.703112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.481 [2024-12-07 05:46:14.712077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.481 [2024-12-07 05:46:14.712652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-12-07 05:46:14.712974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-12-07 05:46:14.712988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.481 [2024-12-07 05:46:14.712997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.481 [2024-12-07 05:46:14.713148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.481 [2024-12-07 05:46:14.713295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.481 [2024-12-07 05:46:14.713308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.481 [2024-12-07 05:46:14.713316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.743 [2024-12-07 05:46:14.715499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.743 [2024-12-07 05:46:14.724453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.743 [2024-12-07 05:46:14.724944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-12-07 05:46:14.725306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-12-07 05:46:14.725317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.743 [2024-12-07 05:46:14.725325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.743 [2024-12-07 05:46:14.725468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.743 [2024-12-07 05:46:14.725538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.743 [2024-12-07 05:46:14.725545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.743 [2024-12-07 05:46:14.725552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.743 [2024-12-07 05:46:14.727974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.743 [2024-12-07 05:46:14.737135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.737503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.737824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.737835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.737842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.737931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.738083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.738091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.738099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.740311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.744 [2024-12-07 05:46:14.749669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.750121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.750439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.750453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.750462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.750624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.750789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.750797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.750809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.752997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.744 [2024-12-07 05:46:14.762222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.762768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.763096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.763111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.763120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.763283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.763429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.763437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.763445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.765538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.744 [2024-12-07 05:46:14.774680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.775315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.775679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.775691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.775701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.775864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.776036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.776045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.776053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.778384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.744 [2024-12-07 05:46:14.787223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.787783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.788132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.788146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.788156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.788319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.788501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.788509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.788517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.790647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.744 [2024-12-07 05:46:14.799877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.800497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.800822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.800835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.800844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.801033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.801143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.801151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.801159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.803377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.744 [2024-12-07 05:46:14.812412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.812979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.813332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.813345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.813355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.813536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.813683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.744 [2024-12-07 05:46:14.813691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.744 [2024-12-07 05:46:14.813699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.744 [2024-12-07 05:46:14.815940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.744 [2024-12-07 05:46:14.824869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.744 [2024-12-07 05:46:14.825395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.825729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.744 [2024-12-07 05:46:14.825742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.744 [2024-12-07 05:46:14.825752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.744 [2024-12-07 05:46:14.825895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.744 [2024-12-07 05:46:14.826068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.826077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.826085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.828340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.745 [2024-12-07 05:46:14.837548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.838105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.838437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.838450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.838460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.838604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.838751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.838759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.838767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.841142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.745 [2024-12-07 05:46:14.850165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.850722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.851053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.851068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.851077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.851165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.851331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.851339] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.851346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.853440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.745 [2024-12-07 05:46:14.862548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.863084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.863480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.863493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.863503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.863684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.863775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.863783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.863791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.866040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.745 [2024-12-07 05:46:14.875286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.875890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.876304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.876318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.876328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.876454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.876619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.876627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.876634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.879022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.745 [2024-12-07 05:46:14.887773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.888343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.888669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.888682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.888692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.888909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.889064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.889074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.889081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.891450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.745 [2024-12-07 05:46:14.900214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.900764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.901082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.901097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.901107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.901307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.901415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.901424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.901432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.903690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.745 [2024-12-07 05:46:14.912835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.913268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.913631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.913649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.913658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.913803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.913968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.913976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.745 [2024-12-07 05:46:14.913984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.745 [2024-12-07 05:46:14.916264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.745 [2024-12-07 05:46:14.925208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.745 [2024-12-07 05:46:14.925796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.926133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.745 [2024-12-07 05:46:14.926148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.745 [2024-12-07 05:46:14.926157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.745 [2024-12-07 05:46:14.926338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.745 [2024-12-07 05:46:14.926429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.745 [2024-12-07 05:46:14.926437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.746 [2024-12-07 05:46:14.926444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.746 [2024-12-07 05:46:14.928466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.746 [2024-12-07 05:46:14.937678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.746 [2024-12-07 05:46:14.938347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.938679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.938692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.746 [2024-12-07 05:46:14.938701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.746 [2024-12-07 05:46:14.938901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.746 [2024-12-07 05:46:14.939075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.746 [2024-12-07 05:46:14.939085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.746 [2024-12-07 05:46:14.939093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.746 [2024-12-07 05:46:14.941108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.746 [2024-12-07 05:46:14.949849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.746 [2024-12-07 05:46:14.950407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.950730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.950743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.746 [2024-12-07 05:46:14.950758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.746 [2024-12-07 05:46:14.950921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.746 [2024-12-07 05:46:14.951075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.746 [2024-12-07 05:46:14.951085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.746 [2024-12-07 05:46:14.951092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.746 [2024-12-07 05:46:14.953244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.746 [2024-12-07 05:46:14.962246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.746 [2024-12-07 05:46:14.962586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.962891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.962901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.746 [2024-12-07 05:46:14.962909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.746 [2024-12-07 05:46:14.963079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.746 [2024-12-07 05:46:14.963223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.746 [2024-12-07 05:46:14.963232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.746 [2024-12-07 05:46:14.963239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.746 [2024-12-07 05:46:14.965400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.746 [2024-12-07 05:46:14.974625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.746 [2024-12-07 05:46:14.975039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.975344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.746 [2024-12-07 05:46:14.975355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:11.746 [2024-12-07 05:46:14.975363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:11.746 [2024-12-07 05:46:14.975524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:11.746 [2024-12-07 05:46:14.975649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.746 [2024-12-07 05:46:14.975656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.746 [2024-12-07 05:46:14.975663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.746 [2024-12-07 05:46:14.977913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.007 [2024-12-07 05:46:14.987220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.007 [2024-12-07 05:46:14.987636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.007 [2024-12-07 05:46:14.987928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.007 [2024-12-07 05:46:14.987939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.007 [2024-12-07 05:46:14.987948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.007 [2024-12-07 05:46:14.988083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.007 [2024-12-07 05:46:14.988191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.007 [2024-12-07 05:46:14.988200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.007 [2024-12-07 05:46:14.988207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.007 [2024-12-07 05:46:14.990456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.007 [2024-12-07 05:46:14.999938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.007 [2024-12-07 05:46:15.000446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.000612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.000622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.000630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.000791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.000916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.000924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.000931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.003125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.012488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.012942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.013321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.013332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.013340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.013502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.013608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.013616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.013623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.016024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.024763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.025146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.025497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.025507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.025515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.025694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.025842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.025850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.025857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.028219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.037252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.037792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.038169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.038184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.038194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.038338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.038465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.038474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.038482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.040672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.049838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.050202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.050506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.050517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.050526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.050705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.050830] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.050838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.050846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.053230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.062296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.062846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.063232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.063248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.063257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.063420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.063566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.063586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.063594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.065872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.074846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.075345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.075666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.075679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.075688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.075887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.076021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.076030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.076038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.078281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.087296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.087792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.088114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.088126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.088134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.088258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.088403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.088411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.088418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.090631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.099768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.100186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.100489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.100499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.100507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.100651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.100868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.100876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.100888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.103270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.112067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.112555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.112871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.112884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.112894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.113001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.113154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.113164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.113171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.115213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.124560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.125064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.125343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.125353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.125361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.125523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.125630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.125637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.125644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.127842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.137037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.137518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.137811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.137821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.137828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.138008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.138159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.138167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.138174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.140467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.149506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.150047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.150431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.150444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.150454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.150598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.150745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.150753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.150761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.153065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.162122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.162688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.163008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.163029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.163039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.163182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.163347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.163355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.163362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.165621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.174634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.175122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.175432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.175443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.175451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.175539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.175683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.175691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.175698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.178026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.008 [2024-12-07 05:46:15.187134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.008 [2024-12-07 05:46:15.187569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.187891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.008 [2024-12-07 05:46:15.187905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.008 [2024-12-07 05:46:15.187914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.008 [2024-12-07 05:46:15.188086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.008 [2024-12-07 05:46:15.188198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.008 [2024-12-07 05:46:15.188206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.008 [2024-12-07 05:46:15.188213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.008 [2024-12-07 05:46:15.190305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.008 [2024-12-07 05:46:15.199696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.009 [2024-12-07 05:46:15.200277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.200600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.200613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.009 [2024-12-07 05:46:15.200623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.009 [2024-12-07 05:46:15.200767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.009 [2024-12-07 05:46:15.200894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.009 [2024-12-07 05:46:15.200902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.009 [2024-12-07 05:46:15.200910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.009 [2024-12-07 05:46:15.203248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.009 [2024-12-07 05:46:15.212305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.009 [2024-12-07 05:46:15.212949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.213277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.213292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.009 [2024-12-07 05:46:15.213302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.009 [2024-12-07 05:46:15.213464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.009 [2024-12-07 05:46:15.213629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.009 [2024-12-07 05:46:15.213638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.009 [2024-12-07 05:46:15.213645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.009 [2024-12-07 05:46:15.215957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.009 [2024-12-07 05:46:15.224794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.009 [2024-12-07 05:46:15.225265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.225589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.225600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.009 [2024-12-07 05:46:15.225608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.009 [2024-12-07 05:46:15.225790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.009 [2024-12-07 05:46:15.225915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.009 [2024-12-07 05:46:15.225923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.009 [2024-12-07 05:46:15.225930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.009 [2024-12-07 05:46:15.228145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.009 [2024-12-07 05:46:15.237310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.009 [2024-12-07 05:46:15.237807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.238105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.009 [2024-12-07 05:46:15.238116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.009 [2024-12-07 05:46:15.238124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.009 [2024-12-07 05:46:15.238249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.009 [2024-12-07 05:46:15.238373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.009 [2024-12-07 05:46:15.238382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.009 [2024-12-07 05:46:15.238389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.009 [2024-12-07 05:46:15.240490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.286 [2024-12-07 05:46:15.249776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.250246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.250556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.250567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.250574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.250755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.250843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.250850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.250858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.253057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.286 [2024-12-07 05:46:15.262312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.262795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.263705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.263731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.263740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.263852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.263978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.263986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.263994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.266273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.286 [2024-12-07 05:46:15.274679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.275106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.275459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.275469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.275476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.275583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.275708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.275716] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.275723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.277973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.286 [2024-12-07 05:46:15.287273] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.287739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.288032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.288051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.288059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.288166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.288309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.288318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.288325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.290520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.286 [2024-12-07 05:46:15.300075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.300569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.300881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.300892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.300903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.301034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.301177] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.301186] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.301192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.303613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.286 [2024-12-07 05:46:15.312338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.312828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.313151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.313163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.313171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.313332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.286 [2024-12-07 05:46:15.313457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.286 [2024-12-07 05:46:15.313466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.286 [2024-12-07 05:46:15.313473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.286 [2024-12-07 05:46:15.315666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.286 [2024-12-07 05:46:15.324850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.286 [2024-12-07 05:46:15.325298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.325614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-12-07 05:46:15.325624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.286 [2024-12-07 05:46:15.325632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.286 [2024-12-07 05:46:15.325793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.325935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.325943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.325951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.328207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.287 [2024-12-07 05:46:15.337329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.337818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.338117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.338127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.338135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.338282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.338407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.338415] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.338422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.340582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.287 [2024-12-07 05:46:15.349970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.350512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.350836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.350850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.350860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.351029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.351176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.351184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.351192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.353356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.287 [2024-12-07 05:46:15.362407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.362985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.363339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.363353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.363362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.363526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.363673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.363681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.363689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.365999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.287 [2024-12-07 05:46:15.375120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.375458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.375760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.375770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.375778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.375885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.376076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.376085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.376092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.378287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.287 [2024-12-07 05:46:15.387809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.388252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.388558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.388568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.388575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.388754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.388897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.388905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.388912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.391042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.287 [2024-12-07 05:46:15.400288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.400832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.401155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.401171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.401180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.401288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.401453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.401462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.401470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.403672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.287 [2024-12-07 05:46:15.412830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.413289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.413590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.413599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.413607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.413733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.413876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.413888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.413896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.416062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.287 [2024-12-07 05:46:15.425318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.425803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.426123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.426135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.426143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.426286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.426447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.426455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.426462] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.428583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.287 [2024-12-07 05:46:15.437783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.287 [2024-12-07 05:46:15.438299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.438547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-12-07 05:46:15.438562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.287 [2024-12-07 05:46:15.438572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.287 [2024-12-07 05:46:15.438773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.287 [2024-12-07 05:46:15.438957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.287 [2024-12-07 05:46:15.438965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.287 [2024-12-07 05:46:15.438973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.287 [2024-12-07 05:46:15.441106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.287 [2024-12-07 05:46:15.450215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.450718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.451021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.451032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.451041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.451184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.451328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.451336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.451347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.453546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.288 [2024-12-07 05:46:15.462502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.462950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.463410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.463421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.463429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.463573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.463734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.463742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.463749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.465910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.288 [2024-12-07 05:46:15.475048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.475494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.475810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.475821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.475828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.475972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.476083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.476091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.476098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.478419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.288 [2024-12-07 05:46:15.487554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.488059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.488295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.488304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.488312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.488456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.488617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.488625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.488632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.491055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.288 [2024-12-07 05:46:15.500167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.500573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.500766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.500776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.500783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.500963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.501093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.501102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.501110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.503321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.288 [2024-12-07 05:46:15.512591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.288 [2024-12-07 05:46:15.513006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.513250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-12-07 05:46:15.513260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.288 [2024-12-07 05:46:15.513268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.288 [2024-12-07 05:46:15.513413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.288 [2024-12-07 05:46:15.513575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.288 [2024-12-07 05:46:15.513582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.288 [2024-12-07 05:46:15.513590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.288 [2024-12-07 05:46:15.515840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.550 [2024-12-07 05:46:15.525265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.550 [2024-12-07 05:46:15.525677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.525900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.525910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.550 [2024-12-07 05:46:15.525918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.550 [2024-12-07 05:46:15.526067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.550 [2024-12-07 05:46:15.526210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.550 [2024-12-07 05:46:15.526218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.550 [2024-12-07 05:46:15.526225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.550 [2024-12-07 05:46:15.528455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.550 [2024-12-07 05:46:15.537620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.550 [2024-12-07 05:46:15.538121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.538505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.538518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.550 [2024-12-07 05:46:15.538527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.550 [2024-12-07 05:46:15.538708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.550 [2024-12-07 05:46:15.538893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.550 [2024-12-07 05:46:15.538901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.550 [2024-12-07 05:46:15.538909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.550 [2024-12-07 05:46:15.541190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.550 [2024-12-07 05:46:15.550223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.550 [2024-12-07 05:46:15.550518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.550862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.550 [2024-12-07 05:46:15.550879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.550 [2024-12-07 05:46:15.550887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.550 [2024-12-07 05:46:15.551075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.550 [2024-12-07 05:46:15.551182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.550 [2024-12-07 05:46:15.551191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.550 [2024-12-07 05:46:15.551199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.550 [2024-12-07 05:46:15.553489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.550 [2024-12-07 05:46:15.562767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.563243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.563549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.563560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.563568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.563711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.563835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.563844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.563851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.566021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.551 [2024-12-07 05:46:15.575295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.575701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.575997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.576007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.576021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.576183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.576324] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.576332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.576340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.578647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.551 [2024-12-07 05:46:15.587610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.588108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.588425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.588435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.588443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.588623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.588748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.588755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.588762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.590924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.551 [2024-12-07 05:46:15.600137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.600597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.600895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.600906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.600914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.601046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.601208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.601217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.601224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.603512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.551 [2024-12-07 05:46:15.613071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.613642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.613962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.613980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.613989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.614124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.614271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.614280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.614288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.616505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.551 [2024-12-07 05:46:15.625765] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.626215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.626395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.626405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.626413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.626575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.626718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.626726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.626733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.629119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.551 [2024-12-07 05:46:15.638337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.638831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.639148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.639160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.639167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.639329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.639509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.639517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.639524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.641649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.551 [2024-12-07 05:46:15.650967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.651428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.651741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.651752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.651763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.651888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.551 [2024-12-07 05:46:15.652055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.551 [2024-12-07 05:46:15.652064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.551 [2024-12-07 05:46:15.652071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.551 [2024-12-07 05:46:15.654250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.551 [2024-12-07 05:46:15.663529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.551 [2024-12-07 05:46:15.664029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.664330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.551 [2024-12-07 05:46:15.664340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.551 [2024-12-07 05:46:15.664348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.551 [2024-12-07 05:46:15.664510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.664672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.664680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.664687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.666883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.552 [2024-12-07 05:46:15.676218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.676791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.677117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.677132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.677141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.677342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.677488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.677497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.677504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.679779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.552 [2024-12-07 05:46:15.688738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.689195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.689511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.689522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.689530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.689735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.689843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.689851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.689858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.692443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.552 [2024-12-07 05:46:15.701112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.701555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.701875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.701885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.701892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.702063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.702235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.702242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.702249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.704537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.552 [2024-12-07 05:46:15.713703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.714254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.714576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.714590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.714599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.714779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.714907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.714915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.714923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.717298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.552 [2024-12-07 05:46:15.726209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.726776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.727093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.727108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.727118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.727280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.727450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.727459] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.727467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.729632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.552 [2024-12-07 05:46:15.738727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.739183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.739477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.739486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.739495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.739675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.739818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.739826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.739833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.741940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.552 [2024-12-07 05:46:15.751229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.751723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.752046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.752060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.752070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.752214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.752397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.752405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.752412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.754632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.552 [2024-12-07 05:46:15.763829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.552 [2024-12-07 05:46:15.764393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.764721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.552 [2024-12-07 05:46:15.764735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.552 [2024-12-07 05:46:15.764745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.552 [2024-12-07 05:46:15.764927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.552 [2024-12-07 05:46:15.765063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.552 [2024-12-07 05:46:15.765077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.552 [2024-12-07 05:46:15.765085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.552 [2024-12-07 05:46:15.767505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.552 [2024-12-07 05:46:15.776437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.553 [2024-12-07 05:46:15.777038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.553 [2024-12-07 05:46:15.777365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.553 [2024-12-07 05:46:15.777378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.553 [2024-12-07 05:46:15.777388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.553 [2024-12-07 05:46:15.777513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.553 [2024-12-07 05:46:15.777678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.553 [2024-12-07 05:46:15.777686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.553 [2024-12-07 05:46:15.777694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.553 [2024-12-07 05:46:15.779824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.814 [2024-12-07 05:46:15.788925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.789340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.789681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.789691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.789699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.789842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.790026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.790034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.790042] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.792273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.814 [2024-12-07 05:46:15.801271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.801716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.802032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.802043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.802051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.802158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.802282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.802289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.802305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.804747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.814 [2024-12-07 05:46:15.813685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.814292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.814613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.814626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.814636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.814780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.814944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.814952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.814960] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.817228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.814 [2024-12-07 05:46:15.826222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.826824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.827149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.827163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.827173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.827335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.827426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.827433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.827441] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.829575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.814 [2024-12-07 05:46:15.838546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.838998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.839369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.839379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.839386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.839512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.839674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.839681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.839689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.841745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.814 [2024-12-07 05:46:15.851091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.851662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.851982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.851995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.852005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.852176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.852305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.852313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.852321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.854614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.814 [2024-12-07 05:46:15.863669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.864202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.864524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.864537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.814 [2024-12-07 05:46:15.864547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.814 [2024-12-07 05:46:15.864709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.814 [2024-12-07 05:46:15.864800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.814 [2024-12-07 05:46:15.864807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.814 [2024-12-07 05:46:15.864815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.814 [2024-12-07 05:46:15.867002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.814 [2024-12-07 05:46:15.876111] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.814 [2024-12-07 05:46:15.876672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.876885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.814 [2024-12-07 05:46:15.876898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.876908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.877078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.877226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.877234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.877242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.879318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.815 [2024-12-07 05:46:15.888369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.888957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.889238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.889252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.889261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.889442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.889570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.889578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.889587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.891994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.815 [2024-12-07 05:46:15.900747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.901307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.901636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.901649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.901658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.901839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.902004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.902021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.902029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.904393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.815 [2024-12-07 05:46:15.913431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.914044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.914364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.914377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.914387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.914494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.914659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.914667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.914675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.916933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.815 [2024-12-07 05:46:15.925931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.926526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.926848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.926861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.926871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.927062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.927171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.927179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.927187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.929573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.815 [2024-12-07 05:46:15.938514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.939049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.939376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.939389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.939399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.939524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.939706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.939715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.939722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.942022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.815 [2024-12-07 05:46:15.951029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.951637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.951963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.951976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.951986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.952175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.952340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.952349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.952356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.954522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.815 [2024-12-07 05:46:15.963496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.963934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.964311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.964327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.964336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.964480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.964641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.964649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.964657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.966870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.815 [2024-12-07 05:46:15.975891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.976445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.976768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.976781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.976791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.976953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.977125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.977135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.977142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.979385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.815 [2024-12-07 05:46:15.988454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:15.988984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.989311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:15.989326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.815 [2024-12-07 05:46:15.989335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.815 [2024-12-07 05:46:15.989480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.815 [2024-12-07 05:46:15.989609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.815 [2024-12-07 05:46:15.989617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.815 [2024-12-07 05:46:15.989624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.815 [2024-12-07 05:46:15.991959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.815 [2024-12-07 05:46:16.000804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.815 [2024-12-07 05:46:16.001280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:16.001596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.815 [2024-12-07 05:46:16.001606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.816 [2024-12-07 05:46:16.001619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.816 [2024-12-07 05:46:16.001745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.816 [2024-12-07 05:46:16.001870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.816 [2024-12-07 05:46:16.001878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.816 [2024-12-07 05:46:16.001885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.816 [2024-12-07 05:46:16.004161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.816 [2024-12-07 05:46:16.013504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.816 [2024-12-07 05:46:16.013935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.014252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.014264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.816 [2024-12-07 05:46:16.014272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.816 [2024-12-07 05:46:16.014378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.816 [2024-12-07 05:46:16.014504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.816 [2024-12-07 05:46:16.014512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.816 [2024-12-07 05:46:16.014519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.816 [2024-12-07 05:46:16.016969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.816 [2024-12-07 05:46:16.025656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.816 [2024-12-07 05:46:16.026220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.026583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.026596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.816 [2024-12-07 05:46:16.026606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.816 [2024-12-07 05:46:16.026768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.816 [2024-12-07 05:46:16.026897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.816 [2024-12-07 05:46:16.026905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.816 [2024-12-07 05:46:16.026913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.816 [2024-12-07 05:46:16.029249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.816 [2024-12-07 05:46:16.038214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.816 [2024-12-07 05:46:16.038656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.038960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.816 [2024-12-07 05:46:16.038971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:12.816 [2024-12-07 05:46:16.038979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:12.816 [2024-12-07 05:46:16.039151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:12.816 [2024-12-07 05:46:16.039240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.816 [2024-12-07 05:46:16.039247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.816 [2024-12-07 05:46:16.039255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.816 [2024-12-07 05:46:16.041634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.078 [2024-12-07 05:46:16.050700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.078 [2024-12-07 05:46:16.051278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.051508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.051521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.078 [2024-12-07 05:46:16.051530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.078 [2024-12-07 05:46:16.051674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.078 [2024-12-07 05:46:16.051839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.078 [2024-12-07 05:46:16.051848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.078 [2024-12-07 05:46:16.051856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.078 [2024-12-07 05:46:16.054064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.078 [2024-12-07 05:46:16.063138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.078 [2024-12-07 05:46:16.063637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.063958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.063971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.078 [2024-12-07 05:46:16.063981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.078 [2024-12-07 05:46:16.064190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.078 [2024-12-07 05:46:16.064338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.078 [2024-12-07 05:46:16.064346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.078 [2024-12-07 05:46:16.064354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.078 [2024-12-07 05:46:16.066572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.078 [2024-12-07 05:46:16.075660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.078 [2024-12-07 05:46:16.076248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.076572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.078 [2024-12-07 05:46:16.076585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.078 [2024-12-07 05:46:16.076595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.078 [2024-12-07 05:46:16.076776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.078 [2024-12-07 05:46:16.076946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.078 [2024-12-07 05:46:16.076954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.076962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.079169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.079 [2024-12-07 05:46:16.088218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.088788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.089209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.089224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.089234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.089396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.089562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.089570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.089577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.091613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.079 [2024-12-07 05:46:16.100801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.101403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.101726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.101739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.101749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.101966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.102121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.102131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.102138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.104393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.079 [2024-12-07 05:46:16.113132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.113721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.114067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.114082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.114091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.114216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.114363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.114375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.114383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.116532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.079 [2024-12-07 05:46:16.125840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.126380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.126700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.126713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.126723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.126885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.127023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.127032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.127040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.129184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.079 [2024-12-07 05:46:16.138575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.139149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.139473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.139486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.139496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.139658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.139841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.139850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.139857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.142008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.079 [2024-12-07 05:46:16.151014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.151593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.151918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.151931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.151940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.152129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.152296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.152304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.152316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.154591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
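The follow-up "Failed to flush tqpair=0x12578f0 (9): Bad file descriptor" entries report errno 9 (EBADF): by the time the flush runs, the qpair's socket has already been torn down, so the descriptor is no longer valid. A tiny stand-alone sketch (plain POSIX, not the SPDK flush path itself) showing a write on an already-closed descriptor producing the same errno:

    /* ebadf_demo.c - the "(9): Bad file descriptor" in the log is errno 9 (EBADF).
     * Sketch only: close a descriptor first, then try to write through it. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }
        close(fds[1]);                    /* tear the descriptor down first */

        if (write(fds[1], "x", 1) < 0) {
            /* Prints: flush failed (9): Bad file descriptor */
            printf("flush failed (%d): %s\n", errno, strerror(errno));
        }
        return 0;
    }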
00:31:13.079 [2024-12-07 05:46:16.163574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.164112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.164479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.164492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.164502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.164684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.164812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.164820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.164828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.166994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.079 [2024-12-07 05:46:16.175917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.176493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.176809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.176821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.176831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.177040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.177132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.177140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.177148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.179427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.079 [2024-12-07 05:46:16.188456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.079 [2024-12-07 05:46:16.189000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.189347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.079 [2024-12-07 05:46:16.189361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.079 [2024-12-07 05:46:16.189370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.079 [2024-12-07 05:46:16.189514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.079 [2024-12-07 05:46:16.189660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.079 [2024-12-07 05:46:16.189668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.079 [2024-12-07 05:46:16.189677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.079 [2024-12-07 05:46:16.191961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.080 [2024-12-07 05:46:16.201001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.201619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.201939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.201952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.201962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.202152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.202336] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.202344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.202352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.204587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.080 [2024-12-07 05:46:16.213352] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.214020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.214403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.214417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.214427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.214571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.214718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.214726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.214734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.216957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.080 [2024-12-07 05:46:16.225611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.226229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.226470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.226483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.226493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.226692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.226840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.226849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.226857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.229198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.080 [2024-12-07 05:46:16.237990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.238433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.238832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.238842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.238850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.238975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.239161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.239170] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.239177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.241393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.080 [2024-12-07 05:46:16.250634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.251111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.251433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.251447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.251457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.251619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.251747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.251755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.251763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.253897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.080 [2024-12-07 05:46:16.263339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.263698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.263972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.263982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.263991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.264147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.264311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.264319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.264326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.266612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.080 [2024-12-07 05:46:16.275712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.276261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.276582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.276595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.276605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.276713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.276859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.276867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.276874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.279004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.080 [2024-12-07 05:46:16.288259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.288670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.288970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.288980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.288988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.289120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.289300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.289308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.289315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.291693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.080 [2024-12-07 05:46:16.300433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.300992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.301283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.301294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.080 [2024-12-07 05:46:16.301301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.080 [2024-12-07 05:46:16.301445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.080 [2024-12-07 05:46:16.301587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.080 [2024-12-07 05:46:16.301595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.080 [2024-12-07 05:46:16.301602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.080 [2024-12-07 05:46:16.303746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.080 [2024-12-07 05:46:16.313056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.080 [2024-12-07 05:46:16.313521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.313836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.080 [2024-12-07 05:46:16.313850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.081 [2024-12-07 05:46:16.313858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.081 [2024-12-07 05:46:16.313964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.081 [2024-12-07 05:46:16.314096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.081 [2024-12-07 05:46:16.314105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.081 [2024-12-07 05:46:16.314112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.342 [2024-12-07 05:46:16.316362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.342 [2024-12-07 05:46:16.325520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.342 [2024-12-07 05:46:16.325931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.342 [2024-12-07 05:46:16.326237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.342 [2024-12-07 05:46:16.326249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.326257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.326418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.326561] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.326568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.326575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.328808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.343 [2024-12-07 05:46:16.338069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.338568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.338781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.338790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.338798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.338960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.339109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.339117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.339124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.341481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.343 [2024-12-07 05:46:16.350644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.351249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.351571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.351584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.351599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.351724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.351890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.351898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.351906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.354077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.343 [2024-12-07 05:46:16.363193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.363777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.364126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.364141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.364150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.364294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.364478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.364486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.364494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.366677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.343 [2024-12-07 05:46:16.375601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.376206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.376601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.376614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.376623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.376787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.376952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.376960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.376968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.379177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.343 [2024-12-07 05:46:16.388116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.388520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.388758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.388771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.388781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.388952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.389109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.389119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.389127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.391326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.343 [2024-12-07 05:46:16.400567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.401142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.401465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.401478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.401488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.401688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.401852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.401860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.401868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.404095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.343 [2024-12-07 05:46:16.413153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.413752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.414080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.414094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.414104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.414303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.414467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.414476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.414483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.343 [2024-12-07 05:46:16.416811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.343 [2024-12-07 05:46:16.425616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.343 [2024-12-07 05:46:16.426197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.426508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.343 [2024-12-07 05:46:16.426522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.343 [2024-12-07 05:46:16.426532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.343 [2024-12-07 05:46:16.426733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.343 [2024-12-07 05:46:16.426903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.343 [2024-12-07 05:46:16.426911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.343 [2024-12-07 05:46:16.426919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.429275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.344 [2024-12-07 05:46:16.438142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.438622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.438949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.438962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.438972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.439144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.439254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.439262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.439271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.441601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.344 [2024-12-07 05:46:16.450661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.451239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.451560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.451573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.451582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.451708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.451892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.451900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.451908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.454282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.344 [2024-12-07 05:46:16.463151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.463713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.464032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.464046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.464056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.464182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.464348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.464360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.464368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.466698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.344 [2024-12-07 05:46:16.475599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.476183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.476505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.476518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.476528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.476690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.476836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.476845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.476852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.479448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.344 [2024-12-07 05:46:16.488054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.488619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.488941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.488954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.488964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.489136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.489320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.489329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.489336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.491665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.344 [2024-12-07 05:46:16.500498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.500989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.501327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.501338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.501346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.501508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.501596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.501604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.501616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.503815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.344 [2024-12-07 05:46:16.512975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.513407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.513727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.513741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.513750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.513931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.514106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.514115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.514122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.516488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.344 [2024-12-07 05:46:16.525613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.526114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.526487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.526501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.526511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.526692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.526838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.526846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.526854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.529171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.344 [2024-12-07 05:46:16.538195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.538715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.539028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.344 [2024-12-07 05:46:16.539039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.344 [2024-12-07 05:46:16.539048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.344 [2024-12-07 05:46:16.539173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.344 [2024-12-07 05:46:16.539334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.344 [2024-12-07 05:46:16.539343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.344 [2024-12-07 05:46:16.539350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.344 [2024-12-07 05:46:16.541694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.344 [2024-12-07 05:46:16.550682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.344 [2024-12-07 05:46:16.551265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.551592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.551606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.345 [2024-12-07 05:46:16.551616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.345 [2024-12-07 05:46:16.551759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.345 [2024-12-07 05:46:16.551942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.345 [2024-12-07 05:46:16.551951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.345 [2024-12-07 05:46:16.551958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.345 [2024-12-07 05:46:16.554043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.345 [2024-12-07 05:46:16.563177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.345 [2024-12-07 05:46:16.563753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.564079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.564096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.345 [2024-12-07 05:46:16.564106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.345 [2024-12-07 05:46:16.564251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.345 [2024-12-07 05:46:16.564434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.345 [2024-12-07 05:46:16.564442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.345 [2024-12-07 05:46:16.564450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.345 [2024-12-07 05:46:16.566486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.345 [2024-12-07 05:46:16.575670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.345 [2024-12-07 05:46:16.576303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.576624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.345 [2024-12-07 05:46:16.576638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.345 [2024-12-07 05:46:16.576647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.345 [2024-12-07 05:46:16.576792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.345 [2024-12-07 05:46:16.576975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.345 [2024-12-07 05:46:16.576983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.345 [2024-12-07 05:46:16.576991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.606 [2024-12-07 05:46:16.579182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.606 [2024-12-07 05:46:16.588147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.606 [2024-12-07 05:46:16.588660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.606 [2024-12-07 05:46:16.588974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.606 [2024-12-07 05:46:16.588988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.606 [2024-12-07 05:46:16.588997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.606 [2024-12-07 05:46:16.589222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.606 [2024-12-07 05:46:16.589388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.606 [2024-12-07 05:46:16.589396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.606 [2024-12-07 05:46:16.589405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.606 [2024-12-07 05:46:16.591772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.606 [2024-12-07 05:46:16.600582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.606 [2024-12-07 05:46:16.601059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.606 [2024-12-07 05:46:16.601384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.606 [2024-12-07 05:46:16.601395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.606 [2024-12-07 05:46:16.601403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.601547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.601672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.601681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.601688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.603947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.607 [2024-12-07 05:46:16.613062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.613585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.613892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.613905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.613915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.614065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.614213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.614221] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.614229] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.616378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.607 [2024-12-07 05:46:16.625697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.626155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.626467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.626478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.626486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.626647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.626772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.626780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.626787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.629156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.607 [2024-12-07 05:46:16.638112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.638450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.638752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.638762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.638769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.638949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.639118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.639127] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.639134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.641168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.607 [2024-12-07 05:46:16.650560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.651053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.651341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.651352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.651359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.651503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.651682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.651689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.651696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.653765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.607 [2024-12-07 05:46:16.663338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.663866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.664190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.664209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.664219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.664363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.664509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.664517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.664525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.666763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.607 [2024-12-07 05:46:16.675735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.676338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.676651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.676664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.676674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.676818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.676984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.676993] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.677000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.679375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.607 [2024-12-07 05:46:16.688279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.688879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.689214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.689229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.689238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.689400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.689565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.689573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.689581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.691968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.607 [2024-12-07 05:46:16.700836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.701371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.701694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.701707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.701721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.701884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.702039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.702048] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.702056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.704073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.607 [2024-12-07 05:46:16.713517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.714054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.714454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.714468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.714478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.714659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.607 [2024-12-07 05:46:16.714805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.607 [2024-12-07 05:46:16.714814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.607 [2024-12-07 05:46:16.714821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.607 [2024-12-07 05:46:16.717103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.607 [2024-12-07 05:46:16.725978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.607 [2024-12-07 05:46:16.726474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.726782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-12-07 05:46:16.726793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.607 [2024-12-07 05:46:16.726801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.607 [2024-12-07 05:46:16.726963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.727075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.727084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.727091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.729377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.608 [2024-12-07 05:46:16.738253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.738739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.739051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.739062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.739070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.739255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.739436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.739444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.739451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.741757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.608 [2024-12-07 05:46:16.750742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.751190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.751402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.751412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.751419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.751599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.751747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.751755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.751762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.754053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
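Editor's note: every retry block above ends in errno = 111 (ECONNREFUSED) from posix_sock_create's connect(), i.e. nothing is accepting TCP connections on the target address while nvmf_tgt is down. The following is a minimal, stand-alone sketch (not part of the SPDK test suite) that probes a TCP endpoint the same way; the 127.0.0.1:4420 address and the probe() helper are illustrative only, whereas the log's actual target is 10.0.0.2:4420.

```python
#!/usr/bin/env python3
"""Minimal sketch (not SPDK code): reproduce the errno 111 (ECONNREFUSED)
that posix_sock_create reports while no NVMe/TCP target is listening.
The address/port below are illustrative; the log's target is 10.0.0.2:4420."""
import errno
import socket

def probe(addr: str, port: int, timeout: float = 1.0) -> int:
    """Attempt a TCP connect and return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((addr, port))

if __name__ == "__main__":
    rc = probe("127.0.0.1", 4420)  # no listener on this port -> connection refused
    name = errno.errorcode.get(rc, "OK" if rc == 0 else "?")
    print(f"connect_ex returned {rc} ({name})")
    # Expected while nothing listens: 111 (ECONNREFUSED), matching the log above.
```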
00:31:13.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2026855 Killed "${NVMF_APP[@]}" "$@" 00:31:13.608 05:46:16 -- host/bdevperf.sh@36 -- # tgt_init 00:31:13.608 05:46:16 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:13.608 05:46:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:13.608 05:46:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.608 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:31:13.608 [2024-12-07 05:46:16.763383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.763836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.764135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.764145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.764153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 05:46:16 -- nvmf/common.sh@469 -- # nvmfpid=2028499 00:31:13.608 [2024-12-07 05:46:16.764260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.764404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.764411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.764418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 05:46:16 -- nvmf/common.sh@470 -- # waitforlisten 2028499 00:31:13.608 05:46:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:13.608 05:46:16 -- common/autotest_common.sh@829 -- # '[' -z 2028499 ']' 00:31:13.608 05:46:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.608 05:46:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.608 05:46:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.608 05:46:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.608 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:31:13.608 [2024-12-07 05:46:16.766614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
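Editor's note: at this point bdevperf.sh has killed the previous target, restarted nvmf_tgt, and is "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". The sketch below is only an illustration of that wait-for-listen pattern under the assumption that the target's RPC socket is a UNIX domain socket at /var/tmp/spdk.sock; the wait_for_listen() helper is invented here and is not SPDK's waitforlisten implementation.

```python
#!/usr/bin/env python3
"""Illustrative sketch only (not SPDK's waitforlisten helper): poll a UNIX
domain socket until some application starts listening on it, as the
'Waiting for process to start up and listen on ...' message above describes."""
import socket
import time

def wait_for_listen(path: str = "/var/tmp/spdk.sock",
                    retries: int = 100, delay: float = 0.2) -> bool:
    """Return True once a connect() to the UNIX socket succeeds."""
    for _ in range(retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True
        except OSError:
            time.sleep(delay)  # socket missing or not accepting yet; retry
    return False

if __name__ == "__main__":
    print("listening" if wait_for_listen() else "gave up waiting")
```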
00:31:13.608 [2024-12-07 05:46:16.776116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.776592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.776807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.776817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.776825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.776968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.777118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.777128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.777135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.779423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.608 [2024-12-07 05:46:16.788545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.789037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.789216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.789228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.789235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.789399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.789543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.789551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.789558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.791846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.608 [2024-12-07 05:46:16.801331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.801679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.802000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.802017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.802025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.802168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.802329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.802341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.802348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.804542] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.608 [2024-12-07 05:46:16.813861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.814476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.814804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.814817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.814827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.814971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.815090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.815100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.815108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.815166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:13.608 [2024-12-07 05:46:16.815211] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.608 [2024-12-07 05:46:16.817380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.608 [2024-12-07 05:46:16.826373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.826914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.827150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-12-07 05:46:16.827164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.608 [2024-12-07 05:46:16.827175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.608 [2024-12-07 05:46:16.827338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.608 [2024-12-07 05:46:16.827522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.608 [2024-12-07 05:46:16.827531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.608 [2024-12-07 05:46:16.827539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.608 [2024-12-07 05:46:16.829787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.608 [2024-12-07 05:46:16.838824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.608 [2024-12-07 05:46:16.839318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-12-07 05:46:16.839526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-12-07 05:46:16.839536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.609 [2024-12-07 05:46:16.839544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.609 [2024-12-07 05:46:16.839743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.609 [2024-12-07 05:46:16.839874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.609 [2024-12-07 05:46:16.839883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.609 [2024-12-07 05:46:16.839890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.609 [2024-12-07 05:46:16.842199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.871 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.871 [2024-12-07 05:46:16.851192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.851541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.851917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.851927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.851935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.852102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.871 [2024-12-07 05:46:16.852283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.871 [2024-12-07 05:46:16.852290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.871 [2024-12-07 05:46:16.852298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.871 [2024-12-07 05:46:16.854641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.871 [2024-12-07 05:46:16.863547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.864064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.864418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.864429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.864437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.864580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.871 [2024-12-07 05:46:16.864741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.871 [2024-12-07 05:46:16.864749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.871 [2024-12-07 05:46:16.864757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.871 [2024-12-07 05:46:16.867213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.871 [2024-12-07 05:46:16.875998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.876457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.876758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.876770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.876777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.876920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.871 [2024-12-07 05:46:16.877106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.871 [2024-12-07 05:46:16.877118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.871 [2024-12-07 05:46:16.877126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.871 [2024-12-07 05:46:16.879396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.871 [2024-12-07 05:46:16.888409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.888957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.889406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.889420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.889430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.889594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.871 [2024-12-07 05:46:16.889740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.871 [2024-12-07 05:46:16.889749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.871 [2024-12-07 05:46:16.889757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.871 [2024-12-07 05:46:16.891886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.871 [2024-12-07 05:46:16.901047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.901500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.901573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:13.871 [2024-12-07 05:46:16.901807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.901817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.901825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.901968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.871 [2024-12-07 05:46:16.902117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.871 [2024-12-07 05:46:16.902125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.871 [2024-12-07 05:46:16.902132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.871 [2024-12-07 05:46:16.904275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.871 [2024-12-07 05:46:16.913370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.871 [2024-12-07 05:46:16.913812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.914031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.871 [2024-12-07 05:46:16.914042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.871 [2024-12-07 05:46:16.914050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.871 [2024-12-07 05:46:16.914194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.914319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.914333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.914340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.916742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.872 [2024-12-07 05:46:16.925864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.926566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.926894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.926907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.926917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.927127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.927293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.927302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.927310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.929417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.872 [2024-12-07 05:46:16.938325] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.938849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.939109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.939120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.939128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.939255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.939434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.939442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.939450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.941796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.872 [2024-12-07 05:46:16.950803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.951289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.951606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.951616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.951623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.951768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.951930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.951937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.951950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.954030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:13.872 [2024-12-07 05:46:16.954115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.872 [2024-12-07 05:46:16.954121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.872 [2024-12-07 05:46:16.954126] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.872 [2024-12-07 05:46:16.954244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.872 [2024-12-07 05:46:16.954280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.872 [2024-12-07 05:46:16.954477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.872 [2024-12-07 05:46:16.954478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.872 [2024-12-07 05:46:16.963382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.963740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.964090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.964101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.964109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.964254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.964433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.964442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.964450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.966703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.872 [2024-12-07 05:46:16.975960] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.976559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.976695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.976709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.976720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.976870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.977044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.977053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.977061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.979393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.872 [2024-12-07 05:46:16.988422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:16.989000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.989431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:16.989444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:16.989465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:16.989667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:16.989833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:16.989842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:16.989850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:16.991946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.872 [2024-12-07 05:46:17.000772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:17.001434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:17.001828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:17.001841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:17.001851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:17.001996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:17.002151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:17.002160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:17.002169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:17.004389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.872 [2024-12-07 05:46:17.013126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.872 [2024-12-07 05:46:17.013452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:17.013739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.872 [2024-12-07 05:46:17.013749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.872 [2024-12-07 05:46:17.013758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.872 [2024-12-07 05:46:17.013900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.872 [2024-12-07 05:46:17.014087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.872 [2024-12-07 05:46:17.014096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.872 [2024-12-07 05:46:17.014103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.872 [2024-12-07 05:46:17.016117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.873 [2024-12-07 05:46:17.025564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.025958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.026136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.026147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.026155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.026286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.026412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.026420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.026428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.028611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.873 [2024-12-07 05:46:17.038274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.038682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.039005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.039020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.039028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.039172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.039351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.039358] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.039366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.041614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.873 [2024-12-07 05:46:17.050828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.051259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.051564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.051574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.051581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.051706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.051829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.051837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.051844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.054069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.873 [2024-12-07 05:46:17.063415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.063868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.064184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.064196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.064204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.064348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.064458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.064466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.064474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.066742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.873 [2024-12-07 05:46:17.075831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.076390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.076598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.076612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.076621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.076803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.076913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.076922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.076930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.079285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:13.873 [2024-12-07 05:46:17.088286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.088746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.088953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.088963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.088971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.089139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.089210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.089218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.089225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.091498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:13.873 [2024-12-07 05:46:17.100830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.873 [2024-12-07 05:46:17.101346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.101751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.873 [2024-12-07 05:46:17.101760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:13.873 [2024-12-07 05:46:17.101768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:13.873 [2024-12-07 05:46:17.101913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:13.873 [2024-12-07 05:46:17.102062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.873 [2024-12-07 05:46:17.102076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.873 [2024-12-07 05:46:17.102083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.873 [2024-12-07 05:46:17.104517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.136 [2024-12-07 05:46:17.113314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.136 [2024-12-07 05:46:17.113805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.114162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.114172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.136 [2024-12-07 05:46:17.114180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.136 [2024-12-07 05:46:17.114378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.136 [2024-12-07 05:46:17.114485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.136 [2024-12-07 05:46:17.114492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.136 [2024-12-07 05:46:17.114499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.136 [2024-12-07 05:46:17.116767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.136 [2024-12-07 05:46:17.125816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.136 [2024-12-07 05:46:17.126281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.126491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.126502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.136 [2024-12-07 05:46:17.126510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.136 [2024-12-07 05:46:17.126691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.136 [2024-12-07 05:46:17.126815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.136 [2024-12-07 05:46:17.126824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.136 [2024-12-07 05:46:17.126831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.136 [2024-12-07 05:46:17.129230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.136 [2024-12-07 05:46:17.138249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.136 [2024-12-07 05:46:17.138790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.139087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.139104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.136 [2024-12-07 05:46:17.139112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.136 [2024-12-07 05:46:17.139254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.136 [2024-12-07 05:46:17.139452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.136 [2024-12-07 05:46:17.139460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.136 [2024-12-07 05:46:17.139471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.136 [2024-12-07 05:46:17.141626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.136 [2024-12-07 05:46:17.150657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.136 [2024-12-07 05:46:17.151006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.151341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.136 [2024-12-07 05:46:17.151351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.136 [2024-12-07 05:46:17.151359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.151503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.151664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.151672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.151679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.154060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.137 [2024-12-07 05:46:17.163103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.163563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.163876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.163887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.163894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.164062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.164189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.164197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.164204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.166581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.137 [2024-12-07 05:46:17.175608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.176091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.176466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.176479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.176489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.176652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.176780] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.176788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.176796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.178965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.137 [2024-12-07 05:46:17.188020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.188490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.188824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.188837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.188847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.189054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.189275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.189284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.189291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.191603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.137 [2024-12-07 05:46:17.200710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.201299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.201646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.201660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.201670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.201851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.201961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.201970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.201978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.204148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.137 [2024-12-07 05:46:17.213285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.213779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.214097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.214108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.214117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.214278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.214422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.214430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.214437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.216704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.137 [2024-12-07 05:46:17.225918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.226475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.226803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.226816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.226826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.226969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.227160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.227169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.227177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.229339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.137 [2024-12-07 05:46:17.238308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.238810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.239157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.239172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.239181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.239325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.239416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.239424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.239432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.241800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.137 [2024-12-07 05:46:17.250772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.251333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.251663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.251676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.137 [2024-12-07 05:46:17.251686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.137 [2024-12-07 05:46:17.251866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.137 [2024-12-07 05:46:17.251976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.137 [2024-12-07 05:46:17.251985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.137 [2024-12-07 05:46:17.251992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.137 [2024-12-07 05:46:17.254271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.137 [2024-12-07 05:46:17.263386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.137 [2024-12-07 05:46:17.263954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.264289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.137 [2024-12-07 05:46:17.264303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.264312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.264457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.264566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.264574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.264582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.266839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.138 [2024-12-07 05:46:17.276112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.276622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.276947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.276957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.276965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.277113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.277221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.277229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.277236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.279597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.138 [2024-12-07 05:46:17.288574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.289130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.289408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.289422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.289432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.289596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.289743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.289751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.289759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.292076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.138 [2024-12-07 05:46:17.301142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.301606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.301936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.301947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.301955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.302048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.302173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.302182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.302189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.304534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.138 [2024-12-07 05:46:17.313481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.313933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.314142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.314153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.314161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.314267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.314392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.314400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.314407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.316623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.138 [2024-12-07 05:46:17.325930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.326391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.326724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.326737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.326747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.326854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.327000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.327008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.327028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.329284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.138 [2024-12-07 05:46:17.338451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.338904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.339222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.339233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.339246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.339390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.339496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.339505] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.339512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.341709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.138 [2024-12-07 05:46:17.350922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.351430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.351753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.351763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.351771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.351931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.352024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.352032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.352039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.354251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.138 [2024-12-07 05:46:17.363486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.138 [2024-12-07 05:46:17.363937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.364128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.138 [2024-12-07 05:46:17.364138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.138 [2024-12-07 05:46:17.364146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.138 [2024-12-07 05:46:17.364271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.138 [2024-12-07 05:46:17.364414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.138 [2024-12-07 05:46:17.364423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.138 [2024-12-07 05:46:17.364430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.138 [2024-12-07 05:46:17.366706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.401 [2024-12-07 05:46:17.375942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.401 [2024-12-07 05:46:17.376441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.401 [2024-12-07 05:46:17.376701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.401 [2024-12-07 05:46:17.376711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.401 [2024-12-07 05:46:17.376719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.401 [2024-12-07 05:46:17.376866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.401 [2024-12-07 05:46:17.376955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.401 [2024-12-07 05:46:17.376971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.401 [2024-12-07 05:46:17.376979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.401 [2024-12-07 05:46:17.379327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.401 [2024-12-07 05:46:17.388201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.401 [2024-12-07 05:46:17.388659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.389037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.389048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.389055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.389216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.389323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.389330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.389337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.391664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.402 [2024-12-07 05:46:17.400678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.401009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.401345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.401354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.401362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.401541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.401721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.401729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.401736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.403892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.402 [2024-12-07 05:46:17.413088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.413534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.413834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.413843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.413851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.413978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.414163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.414171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.414179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.416375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.402 [2024-12-07 05:46:17.425292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.425713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.425890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.425900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.425908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.426112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.426275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.426282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.426289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.428411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.402 [2024-12-07 05:46:17.437849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.438449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.438781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.438795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.438804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.438967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.439122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.439131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.439139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.441341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.402 [2024-12-07 05:46:17.450322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.450908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.451248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.451264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.451273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.451417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.451568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.451577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.451584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.454119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.402 [2024-12-07 05:46:17.462871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.463343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.463678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.463691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.463701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.463864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.464036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.464045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.464053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.466417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.402 [2024-12-07 05:46:17.475496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.476043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.476391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.476404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.476414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.476576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.476740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.476748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.402 [2024-12-07 05:46:17.476756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.402 [2024-12-07 05:46:17.478739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.402 [2024-12-07 05:46:17.488176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.402 [2024-12-07 05:46:17.488765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.488962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.402 [2024-12-07 05:46:17.488975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.402 [2024-12-07 05:46:17.488985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.402 [2024-12-07 05:46:17.489135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.402 [2024-12-07 05:46:17.489283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.402 [2024-12-07 05:46:17.489299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.489311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.491600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.403 [2024-12-07 05:46:17.500451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.501051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.501406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.501419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.501428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.501572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.501737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.501745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.501753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.503903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.403 [2024-12-07 05:46:17.512919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.513471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.513802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.513816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.513826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.513933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.514051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.514061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.514069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.516272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.403 [2024-12-07 05:46:17.525722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.526141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.526496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.526509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.526519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.526627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.526774] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.526783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.526799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.529114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.403 [2024-12-07 05:46:17.538383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.538845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.539148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.539163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.539173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.539336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.539483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.539492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.539500] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.541775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.403 [2024-12-07 05:46:17.550973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.551545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.551745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.551758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.551768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.551930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.552084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.552095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.552103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.554394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.403 [2024-12-07 05:46:17.563436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.563951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.564257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.564269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.564276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.564384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.564527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.564536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.564543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.566756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.403 [2024-12-07 05:46:17.575951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.576276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.576627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.576637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.576644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.576769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.576894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.576902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.576909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.579145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.403 [2024-12-07 05:46:17.588385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.588840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.589015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.589026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.589034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.403 [2024-12-07 05:46:17.589176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.403 [2024-12-07 05:46:17.589301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.403 [2024-12-07 05:46:17.589309] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.403 [2024-12-07 05:46:17.589316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.403 [2024-12-07 05:46:17.591603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.403 05:46:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:14.403 05:46:17 -- common/autotest_common.sh@862 -- # return 0 00:31:14.403 05:46:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:14.403 05:46:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.403 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.403 [2024-12-07 05:46:17.600683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.403 [2024-12-07 05:46:17.601102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.601496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.403 [2024-12-07 05:46:17.601509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.403 [2024-12-07 05:46:17.601519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.404 [2024-12-07 05:46:17.601718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.404 [2024-12-07 05:46:17.601865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.404 [2024-12-07 05:46:17.601875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.404 [2024-12-07 05:46:17.601887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.404 [2024-12-07 05:46:17.604263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
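The repeated "connect() failed, errno = 111" records above are ECONNREFUSED: the host keeps re-dialing 10.0.0.2:4420 while nothing is listening on that port yet, so every nvme_tcp_qpair_connect_sock attempt fails and the controller reset is retried. A minimal shell probe for the same condition, not part of the test scripts (the address and port are taken from the log; the probe itself is an illustration only):

# hypothetical check, assuming bash with /dev/tcp support and coreutils timeout
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener present on 10.0.0.2:4420"
else
    echo "connection refused or timed out, matching the errno 111 records above"
fi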
00:31:14.404 [2024-12-07 05:46:17.613191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.404 [2024-12-07 05:46:17.613495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.404 [2024-12-07 05:46:17.613837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.404 [2024-12-07 05:46:17.613848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.404 [2024-12-07 05:46:17.613857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.404 [2024-12-07 05:46:17.613981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.404 [2024-12-07 05:46:17.614148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.404 [2024-12-07 05:46:17.614157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.404 [2024-12-07 05:46:17.614165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.404 [2024-12-07 05:46:17.616472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.404 [2024-12-07 05:46:17.625769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.404 [2024-12-07 05:46:17.626297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.404 [2024-12-07 05:46:17.626629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.404 [2024-12-07 05:46:17.626640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.404 [2024-12-07 05:46:17.626648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.404 [2024-12-07 05:46:17.626864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.404 [2024-12-07 05:46:17.626952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.404 [2024-12-07 05:46:17.626961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.404 [2024-12-07 05:46:17.626968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.404 [2024-12-07 05:46:17.629294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.404 05:46:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.404 05:46:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.404 05:46:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.404 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.404 [2024-12-07 05:46:17.637350] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.666 [2024-12-07 05:46:17.638150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.638672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.638995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.639005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.666 [2024-12-07 05:46:17.639018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.666 [2024-12-07 05:46:17.639106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.639324] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.666 [2024-12-07 05:46:17.639335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.666 [2024-12-07 05:46:17.639342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.666 [2024-12-07 05:46:17.641665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.666 05:46:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.666 05:46:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.666 05:46:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.666 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.666 [2024-12-07 05:46:17.650579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.650906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.651235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.651246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.666 [2024-12-07 05:46:17.651254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.666 [2024-12-07 05:46:17.651342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.651465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.666 [2024-12-07 05:46:17.651473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.666 [2024-12-07 05:46:17.651480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:14.666 [2024-12-07 05:46:17.653711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.666 [2024-12-07 05:46:17.662917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.663364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.663621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.663642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.666 [2024-12-07 05:46:17.663652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.666 [2024-12-07 05:46:17.663834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.663982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.666 [2024-12-07 05:46:17.663990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.666 [2024-12-07 05:46:17.663998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.666 [2024-12-07 05:46:17.666373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.666 Malloc0 00:31:14.666 05:46:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.666 05:46:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.666 05:46:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.666 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.666 [2024-12-07 05:46:17.675274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.675781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.676105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.676119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.666 [2024-12-07 05:46:17.676132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.666 [2024-12-07 05:46:17.676276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.676475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.666 [2024-12-07 05:46:17.676483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.666 [2024-12-07 05:46:17.676490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.666 [2024-12-07 05:46:17.678842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.666 05:46:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.666 05:46:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.666 05:46:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.666 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.666 [2024-12-07 05:46:17.687831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.688392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.688612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.688627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12578f0 with addr=10.0.0.2, port=4420 00:31:14.666 [2024-12-07 05:46:17.688636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12578f0 is same with the state(5) to be set 00:31:14.666 [2024-12-07 05:46:17.688853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.689001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.666 [2024-12-07 05:46:17.689009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.666 [2024-12-07 05:46:17.689025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:14.666 [2024-12-07 05:46:17.691315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.666 05:46:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.666 05:46:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.666 05:46:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.666 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:31:14.666 [2024-12-07 05:46:17.700170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.666 [2024-12-07 05:46:17.700590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.666 [2024-12-07 05:46:17.700614] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.666 [2024-12-07 05:46:17.702885] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:31:14.666 [2024-12-07 05:46:17.702925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (107): Transport endpoint is not connected 00:31:14.666 [2024-12-07 05:46:17.703146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12578f0 (9): Bad file descriptor 00:31:14.666 [2024-12-07 05:46:17.703275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.667 [2024-12-07 05:46:17.703291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.667 [2024-12-07 05:46:17.703300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
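The rpc_cmd calls interleaved with the reconnect errors above build the bdevperf target: create the TCP transport, back it with a 64 MB malloc bdev (512-byte blocks), create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and add the 10.0.0.2:4420 listener. A minimal standalone sketch of the same sequence, assuming rpc_cmd wraps scripts/rpc.py against the running nvmf_tgt's default RPC socket (the script path and socket are assumptions; every subcommand and argument is copied from the trace):

# assumed equivalent of the rpc_cmd sequence traced above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears, the pending reconnect attempts can complete, which is why the reset loop finally reports "Resetting controller successful" below.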
00:31:14.667 05:46:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.667 [2024-12-07 05:46:17.705704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.667 05:46:17 -- host/bdevperf.sh@38 -- # wait 2027440 00:31:14.667 [2024-12-07 05:46:17.712609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:14.667 [2024-12-07 05:46:17.786171] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:24.669 00:31:24.669 Latency(us) 00:31:24.669 [2024-12-07T04:46:27.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.669 [2024-12-07T04:46:27.909Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:24.669 Verification LBA range: start 0x0 length 0x4000 00:31:24.669 Nvme1n1 : 15.00 14715.24 57.48 15073.46 0.00 4282.09 573.44 13981.01 00:31:24.669 [2024-12-07T04:46:27.909Z] =================================================================================================================== 00:31:24.669 [2024-12-07T04:46:27.909Z] Total : 14715.24 57.48 15073.46 0.00 4282.09 573.44 13981.01 00:31:24.669 05:46:26 -- host/bdevperf.sh@39 -- # sync 00:31:24.669 05:46:26 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.669 05:46:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.669 05:46:26 -- common/autotest_common.sh@10 -- # set +x 00:31:24.669 05:46:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.669 05:46:26 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:24.669 05:46:26 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:24.669 05:46:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:24.669 05:46:26 -- nvmf/common.sh@116 -- # sync 00:31:24.669 05:46:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:24.669 05:46:26 -- nvmf/common.sh@119 -- # set +e 00:31:24.669 05:46:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:24.669 05:46:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:24.669 rmmod nvme_tcp 00:31:24.669 rmmod nvme_fabrics 00:31:24.669 rmmod nvme_keyring 00:31:24.669 05:46:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:24.669 05:46:26 -- nvmf/common.sh@123 -- # set -e 00:31:24.669 05:46:26 -- nvmf/common.sh@124 -- # return 0 00:31:24.669 05:46:26 -- nvmf/common.sh@477 -- # '[' -n 2028499 ']' 00:31:24.669 05:46:26 -- nvmf/common.sh@478 -- # killprocess 2028499 00:31:24.669 05:46:26 -- common/autotest_common.sh@936 -- # '[' -z 2028499 ']' 00:31:24.669 05:46:26 -- common/autotest_common.sh@940 -- # kill -0 2028499 00:31:24.669 05:46:26 -- common/autotest_common.sh@941 -- # uname 00:31:24.669 05:46:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:24.669 05:46:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2028499 00:31:24.669 05:46:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:24.669 05:46:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:24.669 05:46:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2028499' 00:31:24.669 killing process with pid 2028499 00:31:24.669 05:46:26 -- common/autotest_common.sh@955 -- # kill 2028499 00:31:24.669 05:46:26 -- common/autotest_common.sh@960 -- # wait 2028499 00:31:24.669 05:46:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:24.669 05:46:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:24.669 05:46:26 -- 
nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:24.669 05:46:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.669 05:46:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:24.669 05:46:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.669 05:46:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.669 05:46:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.611 05:46:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:25.611 00:31:25.611 real 0m28.210s 00:31:25.611 user 1m3.213s 00:31:25.611 sys 0m7.429s 00:31:25.611 05:46:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:25.611 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:31:25.611 ************************************ 00:31:25.611 END TEST nvmf_bdevperf 00:31:25.611 ************************************ 00:31:25.611 05:46:28 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:25.611 05:46:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:25.611 05:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.611 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:31:25.611 ************************************ 00:31:25.611 START TEST nvmf_target_disconnect 00:31:25.611 ************************************ 00:31:25.611 05:46:28 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:25.611 * Looking for test storage... 00:31:25.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.611 05:46:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:25.611 05:46:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:25.611 05:46:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:25.611 05:46:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:25.611 05:46:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:25.611 05:46:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:25.611 05:46:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:25.611 05:46:28 -- scripts/common.sh@335 -- # IFS=.-: 00:31:25.611 05:46:28 -- scripts/common.sh@335 -- # read -ra ver1 00:31:25.611 05:46:28 -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.611 05:46:28 -- scripts/common.sh@336 -- # read -ra ver2 00:31:25.611 05:46:28 -- scripts/common.sh@337 -- # local 'op=<' 00:31:25.611 05:46:28 -- scripts/common.sh@339 -- # ver1_l=2 00:31:25.611 05:46:28 -- scripts/common.sh@340 -- # ver2_l=1 00:31:25.611 05:46:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:25.611 05:46:28 -- scripts/common.sh@343 -- # case "$op" in 00:31:25.611 05:46:28 -- scripts/common.sh@344 -- # : 1 00:31:25.611 05:46:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:25.611 05:46:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:25.611 05:46:28 -- scripts/common.sh@364 -- # decimal 1 00:31:25.611 05:46:28 -- scripts/common.sh@352 -- # local d=1 00:31:25.611 05:46:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.611 05:46:28 -- scripts/common.sh@354 -- # echo 1 00:31:25.872 05:46:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:25.872 05:46:28 -- scripts/common.sh@365 -- # decimal 2 00:31:25.872 05:46:28 -- scripts/common.sh@352 -- # local d=2 00:31:25.872 05:46:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.872 05:46:28 -- scripts/common.sh@354 -- # echo 2 00:31:25.872 05:46:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:25.872 05:46:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:25.872 05:46:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:25.872 05:46:28 -- scripts/common.sh@367 -- # return 0 00:31:25.872 05:46:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.872 05:46:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:25.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.872 --rc genhtml_branch_coverage=1 00:31:25.872 --rc genhtml_function_coverage=1 00:31:25.872 --rc genhtml_legend=1 00:31:25.872 --rc geninfo_all_blocks=1 00:31:25.872 --rc geninfo_unexecuted_blocks=1 00:31:25.872 00:31:25.872 ' 00:31:25.872 05:46:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:25.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.872 --rc genhtml_branch_coverage=1 00:31:25.872 --rc genhtml_function_coverage=1 00:31:25.872 --rc genhtml_legend=1 00:31:25.873 --rc geninfo_all_blocks=1 00:31:25.873 --rc geninfo_unexecuted_blocks=1 00:31:25.873 00:31:25.873 ' 00:31:25.873 05:46:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.873 --rc genhtml_branch_coverage=1 00:31:25.873 --rc genhtml_function_coverage=1 00:31:25.873 --rc genhtml_legend=1 00:31:25.873 --rc geninfo_all_blocks=1 00:31:25.873 --rc geninfo_unexecuted_blocks=1 00:31:25.873 00:31:25.873 ' 00:31:25.873 05:46:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:25.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.873 --rc genhtml_branch_coverage=1 00:31:25.873 --rc genhtml_function_coverage=1 00:31:25.873 --rc genhtml_legend=1 00:31:25.873 --rc geninfo_all_blocks=1 00:31:25.873 --rc geninfo_unexecuted_blocks=1 00:31:25.873 00:31:25.873 ' 00:31:25.873 05:46:28 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.873 05:46:28 -- nvmf/common.sh@7 -- # uname -s 00:31:25.873 05:46:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.873 05:46:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.873 05:46:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.873 05:46:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.873 05:46:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.873 05:46:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.873 05:46:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.873 05:46:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.873 05:46:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.873 05:46:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.873 05:46:28 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:25.873 05:46:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:25.873 05:46:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.873 05:46:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.873 05:46:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.873 05:46:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.873 05:46:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.873 05:46:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.873 05:46:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.873 05:46:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.873 05:46:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.873 05:46:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.873 05:46:28 -- paths/export.sh@5 -- # export PATH 00:31:25.873 05:46:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.873 05:46:28 -- nvmf/common.sh@46 -- # : 0 00:31:25.873 05:46:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:25.873 05:46:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:25.873 05:46:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:25.873 05:46:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.873 05:46:28 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.873 05:46:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:25.873 05:46:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:25.873 05:46:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:25.873 05:46:28 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:25.873 05:46:28 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:25.873 05:46:28 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:25.873 05:46:28 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:25.873 05:46:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:25.873 05:46:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.873 05:46:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:25.873 05:46:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:25.873 05:46:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:25.873 05:46:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.873 05:46:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.873 05:46:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.873 05:46:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:25.873 05:46:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:25.873 05:46:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:25.873 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:31:34.013 05:46:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:34.013 05:46:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:34.013 05:46:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:34.013 05:46:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:34.013 05:46:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:34.013 05:46:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:34.013 05:46:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:34.013 05:46:36 -- nvmf/common.sh@294 -- # net_devs=() 00:31:34.013 05:46:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:34.013 05:46:36 -- nvmf/common.sh@295 -- # e810=() 00:31:34.013 05:46:36 -- nvmf/common.sh@295 -- # local -ga e810 00:31:34.013 05:46:36 -- nvmf/common.sh@296 -- # x722=() 00:31:34.013 05:46:36 -- nvmf/common.sh@296 -- # local -ga x722 00:31:34.013 05:46:36 -- nvmf/common.sh@297 -- # mlx=() 00:31:34.013 05:46:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:34.013 05:46:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.013 05:46:36 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:31:34.013 05:46:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:34.013 05:46:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:34.013 05:46:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:34.013 05:46:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:34.013 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:34.013 05:46:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:34.013 05:46:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:34.013 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:34.013 05:46:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:34.013 05:46:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:34.013 05:46:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:34.014 05:46:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.014 05:46:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:34.014 05:46:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.014 05:46:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:34.014 Found net devices under 0000:31:00.0: cvl_0_0 00:31:34.014 05:46:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.014 05:46:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:34.014 05:46:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.014 05:46:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:34.014 05:46:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.014 05:46:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:34.014 Found net devices under 0000:31:00.1: cvl_0_1 00:31:34.014 05:46:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.014 05:46:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:34.014 05:46:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:34.014 05:46:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:34.014 05:46:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:34.014 05:46:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:34.014 05:46:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.014 05:46:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.014 05:46:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.014 05:46:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:34.014 05:46:36 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.014 05:46:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.014 05:46:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:34.014 05:46:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.014 05:46:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.014 05:46:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:34.014 05:46:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:34.014 05:46:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.014 05:46:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.014 05:46:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.014 05:46:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.014 05:46:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:34.014 05:46:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.014 05:46:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.014 05:46:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.014 05:46:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:34.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:31:34.014 00:31:34.014 --- 10.0.0.2 ping statistics --- 00:31:34.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.014 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:31:34.014 05:46:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:31:34.014 00:31:34.014 --- 10.0.0.1 ping statistics --- 00:31:34.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.014 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:34.014 05:46:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.014 05:46:36 -- nvmf/common.sh@410 -- # return 0 00:31:34.014 05:46:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:34.014 05:46:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.014 05:46:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:34.014 05:46:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:34.014 05:46:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.014 05:46:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:34.014 05:46:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:34.014 05:46:36 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:34.014 05:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:34.014 05:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:34.014 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:31:34.014 ************************************ 00:31:34.014 START TEST nvmf_target_disconnect_tc1 00:31:34.014 ************************************ 00:31:34.014 05:46:36 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:31:34.014 05:46:36 -- host/target_disconnect.sh@32 -- # set +e 00:31:34.014 05:46:36 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.014 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.014 [2024-12-07 05:46:36.499064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.014 [2024-12-07 05:46:36.499473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.014 [2024-12-07 05:46:36.499489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e26300 with addr=10.0.0.2, port=4420 00:31:34.014 [2024-12-07 05:46:36.499514] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:34.014 [2024-12-07 05:46:36.499525] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:34.014 [2024-12-07 05:46:36.499534] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:34.014 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:34.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:34.014 Initializing NVMe Controllers 00:31:34.014 05:46:36 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:34.014 05:46:36 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:34.014 05:46:36 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:31:34.014 05:46:36 -- common/autotest_common.sh@1142 -- # return 0 00:31:34.014 05:46:36 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:34.014 05:46:36 -- host/target_disconnect.sh@41 -- # set -e 00:31:34.014 00:31:34.014 real 0m0.102s 00:31:34.014 user 0m0.042s 00:31:34.014 sys 0m0.059s 00:31:34.014 05:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:34.014 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:31:34.014 ************************************ 00:31:34.014 
END TEST nvmf_target_disconnect_tc1 00:31:34.014 ************************************ 00:31:34.014 05:46:36 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:34.014 05:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:34.014 05:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:34.014 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:31:34.014 ************************************ 00:31:34.014 START TEST nvmf_target_disconnect_tc2 00:31:34.014 ************************************ 00:31:34.014 05:46:36 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:31:34.014 05:46:36 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:34.014 05:46:36 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:34.014 05:46:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:34.014 05:46:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:34.014 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:31:34.014 05:46:36 -- nvmf/common.sh@469 -- # nvmfpid=2034626 00:31:34.014 05:46:36 -- nvmf/common.sh@470 -- # waitforlisten 2034626 00:31:34.014 05:46:36 -- common/autotest_common.sh@829 -- # '[' -z 2034626 ']' 00:31:34.014 05:46:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.014 05:46:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:34.014 05:46:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.014 05:46:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:34.014 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:31:34.014 05:46:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:34.014 [2024-12-07 05:46:36.605119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:34.014 [2024-12-07 05:46:36.605184] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.014 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.014 [2024-12-07 05:46:36.696560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.014 [2024-12-07 05:46:36.788904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:34.014 [2024-12-07 05:46:36.789063] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.014 [2024-12-07 05:46:36.789079] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.014 [2024-12-07 05:46:36.789088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
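The rpc_cmd calls in the next stretch of the log configure the target that was just launched inside cvl_0_0_ns_spdk: a small malloc bdev (64 MB, 512-byte blocks) exported through subsystem nqn.2016-06.io.spdk:cnode1 over NVMe/TCP on 10.0.0.2:4420. A minimal by-hand equivalent, sketched here with SPDK's scripts/rpc.py instead of the test helpers, assuming the spdk checkout as the working directory:
# Start the target inside the test namespace and wait for its RPC socket
# (the wait loop below is a stand-in for the test's waitforlisten helper).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
# Same configuration the rpc_cmd calls below apply: malloc bdev, TCP transport,
# subsystem with that bdev as a namespace, data and discovery listeners on 10.0.0.2:4420.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420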
00:31:34.014 [2024-12-07 05:46:36.789568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:34.014 [2024-12-07 05:46:36.789699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:34.014 [2024-12-07 05:46:36.789867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:34.015 [2024-12-07 05:46:36.789902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:34.275 05:46:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.275 05:46:37 -- common/autotest_common.sh@862 -- # return 0 00:31:34.275 05:46:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:34.275 05:46:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.275 05:46:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.275 05:46:37 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:34.275 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.275 Malloc0 00:31:34.275 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.275 05:46:37 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:34.275 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.275 [2024-12-07 05:46:37.473274] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.275 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.275 05:46:37 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:34.275 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.275 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.275 05:46:37 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:34.275 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.275 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.275 05:46:37 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.275 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.275 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.537 [2024-12-07 05:46:37.513712] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.537 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.537 05:46:37 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:34.537 05:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.537 05:46:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.537 05:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.537 05:46:37 -- host/target_disconnect.sh@50 -- # reconnectpid=2034960 00:31:34.537 05:46:37 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:34.537 05:46:37 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.537 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.456 05:46:39 -- host/target_disconnect.sh@53 -- # kill -9 2034626 00:31:36.456 05:46:39 -- host/target_disconnect.sh@55 -- # sleep 2
00:31:36.456 Read completed with error (sct=0, sc=8) 00:31:36.456 starting I/O failed 00:31:36.456 Read completed with error (sct=0, sc=8) 00:31:36.456 starting I/O failed [... the remaining queued Read/Write completions on this qpair fail the same way ...] 00:31:36.456 [2024-12-07 05:46:39.546389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... a second, identical run of failed Read/Write completions follows for the other qpair ...] 00:31:36.457 [2024-12-07 05:46:39.546713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.457 [2024-12-07 05:46:39.547109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.457 [2024-12-07 05:46:39.547329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.457 [2024-12-07 05:46:39.547342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.457 qpair failed and we were unable to recover it.
[... the reconnect attempts timestamped 05:46:39.547529 through 05:46:39.612804 repeat this same pattern -- two "connect() failed, errno = 111" messages, "sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." -- with only the timestamps changing ...]
00:31:36.461 [2024-12-07 05:46:39.613103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.613514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.613531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.613808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.613998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.614007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.614299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.614617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.614628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.614907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.615207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.615217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.615517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.615800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.615810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.616078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.616397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.616407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.616742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.617055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.617065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 
00:31:36.461 [2024-12-07 05:46:39.617370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.617683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.617692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.617965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.618248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.618258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.618463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.618686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.618696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.619021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.619229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.619239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.619565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.619871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.619880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.620052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.620410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.620419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.620717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.621023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.621033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 
00:31:36.461 [2024-12-07 05:46:39.621342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.621641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.621651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.621956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.622297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.622308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.622596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.622921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.622931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.623231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.623572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.623581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.623887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.624177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.624187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.624355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.624616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.624626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 00:31:36.461 [2024-12-07 05:46:39.624950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.625172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.625181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.461 qpair failed and we were unable to recover it. 
00:31:36.461 [2024-12-07 05:46:39.625391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.461 [2024-12-07 05:46:39.625674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.625683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.625892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.626092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.626102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.626409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.626695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.626704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.626892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.627221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.627230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.627542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.627859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.627869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.628164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.628479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.628489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.628788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.629074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.629089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 
00:31:36.462 [2024-12-07 05:46:39.629264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.629553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.629563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.629882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.630177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.630187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.630502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.630813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.630823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.631091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.631424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.631434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.631758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.632052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.632062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.632364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.632566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.632576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.632896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.633194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.633205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 
00:31:36.462 [2024-12-07 05:46:39.633562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.633753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.633762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.634054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.634414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.634423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.634703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.635028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.635038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.635259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.635575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.635585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.635891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.636183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.636194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.636389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.636662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.636674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.637002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.637315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.637325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 
00:31:36.462 [2024-12-07 05:46:39.637653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.637963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.637973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.638285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.638575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.638585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.638910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.639099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.639109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.639442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.639755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.639766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.640077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.640414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.640424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.640725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.641019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.641028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.641328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.641624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.641634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 
00:31:36.462 [2024-12-07 05:46:39.641959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.642271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.642281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.462 qpair failed and we were unable to recover it. 00:31:36.462 [2024-12-07 05:46:39.642471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.462 [2024-12-07 05:46:39.642655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.642665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.642969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.643199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.643208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.643514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.643817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.643827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.643992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.644266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.644276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.644638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.644928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.644937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.645255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.645590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.645599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 
00:31:36.463 [2024-12-07 05:46:39.645904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.646207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.646216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.646506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.646778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.646787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.647078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.647407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.647416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.647720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.648014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.648024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.648316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.648629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.648638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.648919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.649218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.649227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.649538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.649713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.649723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 
00:31:36.463 [2024-12-07 05:46:39.650021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.650335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.650345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.650622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.650939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.650949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.651245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.651431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.651440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.651657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.651967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.651976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.652291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.652589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.652598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.652902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.653090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.653101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.653296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.653663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.653672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 
00:31:36.463 [2024-12-07 05:46:39.653952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.654251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.654260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.654564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.654847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.654856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.655160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.655489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.655498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.655780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.656110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.656120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.656429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.656762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.656772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.657081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.657486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.657495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.657840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.658143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.658152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 
00:31:36.463 [2024-12-07 05:46:39.658347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.658702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.658711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.659111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.659426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.463 [2024-12-07 05:46:39.659435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.463 qpair failed and we were unable to recover it. 00:31:36.463 [2024-12-07 05:46:39.659758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.660070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.660081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.660270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.660601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.660611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.660921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.661243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.661252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.661545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.661862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.661872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.662034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.662336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.662346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 
00:31:36.464 [2024-12-07 05:46:39.662648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.662977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.662986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.663268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.663453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.663462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.663769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.664087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.664097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.664499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.664802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.664811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.665137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.665444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.665454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.665759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.666074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.666084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.666409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.666582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.666592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 
00:31:36.464 [2024-12-07 05:46:39.666851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.667105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.667117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.667427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.667740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.667750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.668022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.668339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.668349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.668562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.668863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.668872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.669175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.669507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.669516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.669820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.670125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.670135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.670460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.670804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.670814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 
00:31:36.464 [2024-12-07 05:46:39.671102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.671393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.671402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.671719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.672128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.672137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.672378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.672669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.672679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.672963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.673249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.673259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.673566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.673875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.673884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.674168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.674493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.674502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.674783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.675075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.675085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 
00:31:36.464 [2024-12-07 05:46:39.675389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.675702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.675712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.675990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.676306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.676316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.464 [2024-12-07 05:46:39.676631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.676988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.464 [2024-12-07 05:46:39.676997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.464 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.677371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.677656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.677665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.677963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.678246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.678256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.678557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.678877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.678887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.679180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.679482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.679492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 
00:31:36.465 [2024-12-07 05:46:39.679774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.680090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.680100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.680404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.680706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.680715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.681018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.681331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.681340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.681621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.681825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.681834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.682103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.682287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.682297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.682606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.682923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.682932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.683245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.683527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.683538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 
00:31:36.465 [2024-12-07 05:46:39.683851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.684171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.684181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.684509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.684700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.684710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.684921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.685149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.685160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.685449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.685735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.685745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.686155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.686545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.686554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.686838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.687155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.687165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.465 [2024-12-07 05:46:39.687483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.687777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.687786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 
00:31:36.465 [2024-12-07 05:46:39.688098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.688413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.465 [2024-12-07 05:46:39.688422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.465 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.688795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.689104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.689114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.689423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.689734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.689744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.690037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.690340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.690350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.690638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.690849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.690858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.691178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.691517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.691527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 00:31:36.735 [2024-12-07 05:46:39.691832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.692162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.692174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.735 qpair failed and we were unable to recover it. 
00:31:36.735 [2024-12-07 05:46:39.692491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.735 [2024-12-07 05:46:39.692793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.692802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.692974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.693341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.693350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.693656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.693802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.693812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.694082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.694422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.694440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.694737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.695035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.695045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.695425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.695642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.695651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.695980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.696287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.696297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 
00:31:36.736 [2024-12-07 05:46:39.696610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.696899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.696908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.697179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.697360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.697370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.697696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.698298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.698791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.698980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.699282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.699600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.699609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.699889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.700173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.700183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 
00:31:36.736 [2024-12-07 05:46:39.700473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.700772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.700781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.701147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.701424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.701433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.701742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.702033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.702043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.702325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.702633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.702642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.702923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.703227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.703237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.703465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.703792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.703801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.704069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.704370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.704379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 
00:31:36.736 [2024-12-07 05:46:39.704780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.705044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.705054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.705374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.705675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.705686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.706006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.706293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.706302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.706579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.706898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.706907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.707217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.707545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.707554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.707860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.708156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.708166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 00:31:36.736 [2024-12-07 05:46:39.708443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.708733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.708743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.736 qpair failed and we were unable to recover it. 
00:31:36.736 [2024-12-07 05:46:39.709051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.709342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.736 [2024-12-07 05:46:39.709351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.709729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.710023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.710032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.710207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.710493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.710502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.710824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.711118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.711128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.711414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.711612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.711621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.711812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.712075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.712084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.712373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.712564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.712574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 
00:31:36.737 [2024-12-07 05:46:39.712926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.713246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.713255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.713545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.713865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.713874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.714154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.714489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.714498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.714881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.715148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.715158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.715446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.715765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.715774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.716077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.716368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.716377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.716593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.716976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.716986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 
00:31:36.737 [2024-12-07 05:46:39.717161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.717489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.717498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.717687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.717952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.717962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.718254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.718574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.718583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.718950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.719140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.719150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.719475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.719810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.719820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.720127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.720429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.720438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.720734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.720930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.720940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 
00:31:36.737 [2024-12-07 05:46:39.721247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.721541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.721550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.721861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.722075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.722090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.722254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.722532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.722541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.722825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.723138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.723148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.723450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.723761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.723770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.724056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.724391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.724400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.724683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.724961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.724970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 
00:31:36.737 [2024-12-07 05:46:39.725262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.725543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.725552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.737 qpair failed and we were unable to recover it. 00:31:36.737 [2024-12-07 05:46:39.725749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.737 [2024-12-07 05:46:39.726050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.726070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.726375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.726678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.726687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.726894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.727198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.727208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.727492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.727805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.727815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.728113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.728405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.728415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.728752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.729055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.729064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 
00:31:36.738 [2024-12-07 05:46:39.729368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.729675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.729684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.729964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.730293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.730303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.730583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.730901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.730910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.731279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.731595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.731605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.731922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.732226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.732236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.732511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.732826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.732835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.733167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.733469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.733480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 
00:31:36.738 [2024-12-07 05:46:39.733787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.734102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.734112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.734428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.734716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.734725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.735163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.735444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.735453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.735758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.736084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.736094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.736401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.736691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.736700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.736984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.737311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.737321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.737617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.737942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.737952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 
00:31:36.738 [2024-12-07 05:46:39.738262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.738549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.738558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.738733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.739036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.739047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.739342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.739631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.739640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.739939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.740258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.740268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.740556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.740875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.740884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.741174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.741450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.741459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.741764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.742053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.742063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 
00:31:36.738 [2024-12-07 05:46:39.742371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.742671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.742680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.742982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.743350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.738 [2024-12-07 05:46:39.743359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.738 qpair failed and we were unable to recover it. 00:31:36.738 [2024-12-07 05:46:39.743643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.743842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.743851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.739 qpair failed and we were unable to recover it. 00:31:36.739 [2024-12-07 05:46:39.744157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.744352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.744362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.739 qpair failed and we were unable to recover it. 00:31:36.739 [2024-12-07 05:46:39.744676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.744987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.744997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.739 qpair failed and we were unable to recover it. 00:31:36.739 [2024-12-07 05:46:39.745293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.745584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.745594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.739 qpair failed and we were unable to recover it. 00:31:36.739 [2024-12-07 05:46:39.745924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.746243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.739 [2024-12-07 05:46:39.746253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.739 qpair failed and we were unable to recover it. 
00:31:36.739 [2024-12-07 05:46:39.746607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.739 [2024-12-07 05:46:39.746896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.739 [2024-12-07 05:46:39.746905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:36.739 qpair failed and we were unable to recover it.
00:31:36.739-00:31:36.744 [2024-12-07 05:46:39.747186 .. 05:46:39.836335] the same sequence repeats continuously for this qpair: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (reported twice per attempt), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."
00:31:36.744 [2024-12-07 05:46:39.836626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.836968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.836977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.837267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.837590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.837599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.837750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.837933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.837942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.838254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.838595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.838605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.838797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.839087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.839097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.839395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.839720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.839729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.744 qpair failed and we were unable to recover it. 00:31:36.744 [2024-12-07 05:46:39.839917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.744 [2024-12-07 05:46:39.840187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.840197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 
00:31:36.745 [2024-12-07 05:46:39.840414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.840502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.840511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.840805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.841119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.841128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.841386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.841593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.841602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.841902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.842188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.842198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.842572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.842880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.842889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.843078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.843349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.843360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.843678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.843992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.844002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 
00:31:36.745 [2024-12-07 05:46:39.844318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.844517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.844526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.844939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.845235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.845244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.845405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.845695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.845705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.846019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.846316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.846326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.846651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.846809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.846819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.847040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.847349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.847358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.847678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.847976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.847986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 
00:31:36.745 [2024-12-07 05:46:39.848288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.848428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.848438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.848742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.849050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.849059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.849381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.849702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.849711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.849903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.850184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.850193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.850514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.850802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.850811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.851028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.851309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.851318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.851643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.851968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.851977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 
00:31:36.745 [2024-12-07 05:46:39.852172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.852510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.852520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.852837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.853130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.853140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.853249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.853520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.853530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.853811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.854118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.854127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.854311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.854649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.854659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.745 qpair failed and we were unable to recover it. 00:31:36.745 [2024-12-07 05:46:39.854988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.855298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.745 [2024-12-07 05:46:39.855308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.855665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.856029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.856040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 
00:31:36.746 [2024-12-07 05:46:39.856135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.856432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.856441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.856747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.857043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.857053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.857222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.857392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.857402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.857673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.857993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.858003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.858308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.858637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.858647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.858857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.859140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.859153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.859504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.859663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.859673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 
00:31:36.746 [2024-12-07 05:46:39.859731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.860059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.860069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.860443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.860762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.860771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.861085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.861445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.861455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.861776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.861957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.861966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.862319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.862656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.862666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.862954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.863270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.863280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.863508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.863817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.863827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 
00:31:36.746 [2024-12-07 05:46:39.864196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.864479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.864488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.864797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.865108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.865120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.865421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.865721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.865730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.866013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.866091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.866099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.866386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.866663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.866672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.866961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.867299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.867309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.867592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.867909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.867918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 
00:31:36.746 [2024-12-07 05:46:39.868221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.868513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.868522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.868809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.869098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.869108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.869415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.869710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.869719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.870020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.870330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.870339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.870629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.870926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.870936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.871253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.871591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.871602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.746 qpair failed and we were unable to recover it. 00:31:36.746 [2024-12-07 05:46:39.871904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.746 [2024-12-07 05:46:39.872222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.872233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 
00:31:36.747 [2024-12-07 05:46:39.872516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.872759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.872769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.872952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.873244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.873254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.873541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.873862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.873872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.874125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.874414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.874423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.874776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.875113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.875123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.875391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.875713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.875723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.875898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.876105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.876121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 
00:31:36.747 [2024-12-07 05:46:39.876448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.876737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.876747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.877041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.877364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.877373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.877664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.877853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.877862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.878137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.878467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.878476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.878862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.879159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.879169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.879467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.879755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.879764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.880070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.880349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.880358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 
00:31:36.747 [2024-12-07 05:46:39.880669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.880868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.880877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.881166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.881465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.881474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.881766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.882132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.882142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.882444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.882738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.882748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.883061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.883376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.883386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.883699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.883977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.883986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.884301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.884630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.884639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 
00:31:36.747 [2024-12-07 05:46:39.884941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.885254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.885263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.885558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.885840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.885849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.886150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.886462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.886472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.886751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.887061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.887072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.887379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.887690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.887700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.888022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.888323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.888333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 00:31:36.747 [2024-12-07 05:46:39.888644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.888869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.747 [2024-12-07 05:46:39.888878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.747 qpair failed and we were unable to recover it. 
00:31:36.748 [2024-12-07 05:46:39.889210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.889527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.889540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.889926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.890218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.890228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.890396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.890698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.890708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.890992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.891326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.891335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.891638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.891935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.891944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.892234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.892539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.892548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 00:31:36.748 [2024-12-07 05:46:39.892852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.893153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.748 [2024-12-07 05:46:39.893162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:36.748 qpair failed and we were unable to recover it. 
00:31:36.748 [2024-12-07 05:46:39.893442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.748 [2024-12-07 05:46:39.893733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.748 [2024-12-07 05:46:39.893743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:36.748 qpair failed and we were unable to recover it.
00:31:36.748 [... the same error group repeats for every subsequent reconnect attempt from 05:46:39.893 through 05:46:39.985: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." ...]
00:31:37.025 [2024-12-07 05:46:39.984883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.025 [2024-12-07 05:46:39.985218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.025 [2024-12-07 05:46:39.985228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:37.025 qpair failed and we were unable to recover it.
00:31:37.025 [2024-12-07 05:46:39.985533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.985715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.985726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.985913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.986207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.986218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.986524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.986792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.986803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.987137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.987420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.987430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.987737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.987898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.987909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.988177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.988514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.988524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.988825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.989116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.989126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 
00:31:37.025 [2024-12-07 05:46:39.989465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.989744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.989754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.990151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.990446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.990456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.990624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.990921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.990930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.991238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.991564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.991573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.991736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.992018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.992029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.992232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.992553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.992562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.992767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.993043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.993053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 
00:31:37.025 [2024-12-07 05:46:39.993339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.993661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.993672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.993856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.994137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.994147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.994451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.994783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.994792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.995101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.995406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.995416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.995723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.996017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.996027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.996369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.996669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.996678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.025 [2024-12-07 05:46:39.996950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.997273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.997283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 
00:31:37.025 [2024-12-07 05:46:39.997480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.997695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.025 [2024-12-07 05:46:39.997705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.025 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:39.997989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.998176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.998188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:39.998552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.998871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.998881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:39.999176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.999470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:39.999480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:39.999790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.000117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.000126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.000444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.000769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.000779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.000955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.001226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.001237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 
00:31:37.026 [2024-12-07 05:46:40.001566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.002309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.002322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.002714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.003100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.003110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.003401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.003635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.003644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.003964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.004276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.004286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.004589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.004916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.004926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.005097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.005315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.005329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.005644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.005946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.005955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 
00:31:37.026 [2024-12-07 05:46:40.006300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.006616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.006626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.006942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.007238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.007248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.007562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.007902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.007912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.008227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.008527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.008537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.008851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.009161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.009172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.009359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.009728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.009737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.010048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.010469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.010479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 
00:31:37.026 [2024-12-07 05:46:40.010681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.010882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.010893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.011253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.011557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.011567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.011785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.012020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.012029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.012348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.012537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.012546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.012860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.013064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.013074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.013383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.013554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.013565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.013878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.014094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.014104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 
00:31:37.026 [2024-12-07 05:46:40.014443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.014772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.014782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.026 qpair failed and we were unable to recover it. 00:31:37.026 [2024-12-07 05:46:40.015082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.026 [2024-12-07 05:46:40.015387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.015397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.015708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.016036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.016046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.016364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.016701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.016710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.016903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.017234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.017244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.017550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.017841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.017850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.018041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.018317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.018326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 
00:31:37.027 [2024-12-07 05:46:40.018644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.018842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.018852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.019162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.019475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.019485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.019797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.020077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.020087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.020422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.020745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.020754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.021068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.021381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.021390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.021697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.021981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.021990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.022208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.022543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.022553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 
00:31:37.027 [2024-12-07 05:46:40.022838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.023150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.023159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.023431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.023620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.023631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.023985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.024297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.024307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.024640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.024993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.025002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.025222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.025417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.025426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.025602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.025916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.025926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.026246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.026658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.026668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 
00:31:37.027 [2024-12-07 05:46:40.026984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.027338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.027349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.027664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.027855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.027866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.028194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.028506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.028515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.028839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.029036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.029046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.029391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.029695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.029707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.029932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.030250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.030261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.030544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.030744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.030754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 
00:31:37.027 [2024-12-07 05:46:40.031059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.031373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.031383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.031696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.032061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.027 [2024-12-07 05:46:40.032071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.027 qpair failed and we were unable to recover it. 00:31:37.027 [2024-12-07 05:46:40.032339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.032622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.032631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.032794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.033037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.033046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.033350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.033664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.033675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.033947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.034080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.034091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.034417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.034731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.034741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 
00:31:37.028 [2024-12-07 05:46:40.035053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.035265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.035277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.035563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.035890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.035900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.036127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.036435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.036445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.036734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.036951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.036962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.037258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.037574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.037584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.037971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.038288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.038299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.038609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.038803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.038813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 
00:31:37.028 [2024-12-07 05:46:40.039201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.039493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.039502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.039827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.040145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.040154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.040471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.040802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.040812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.041120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.041413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.041422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.041629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.041897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.041906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.042106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.042478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.042488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.042675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.042980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.042990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 
00:31:37.028 [2024-12-07 05:46:40.043297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.043598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.043608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.043994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.044292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.044302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.044613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.044902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.044912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.045226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.045539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.045549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.045857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.046167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.028 [2024-12-07 05:46:40.046177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.028 qpair failed and we were unable to recover it. 00:31:37.028 [2024-12-07 05:46:40.046363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.029 [2024-12-07 05:46:40.046711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.029 [2024-12-07 05:46:40.046720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.029 qpair failed and we were unable to recover it. 00:31:37.029 [2024-12-07 05:46:40.047029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.029 [2024-12-07 05:46:40.047377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.029 [2024-12-07 05:46:40.047386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.029 qpair failed and we were unable to recover it. 
[... the identical pattern of "posix_sock_create: *ERROR*: connect() failed, errno = 111" followed by "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." repeats continuously through 2024-12-07 05:46:40.130 ...]
00:31:37.034 [2024-12-07 05:46:40.130783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.131103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.131113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.131396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.131569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.131579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.131882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.132206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.132216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.132417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.132750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.132759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.132867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.133212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.133223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.134431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.134813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.134825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.135174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.135483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.135493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 
00:31:37.034 [2024-12-07 05:46:40.135773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.136035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.136045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.136333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.136660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.136671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.136955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.137359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.137369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.137675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.137902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.137913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.138117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.138376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.138385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.138705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.138880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.138890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.139103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.139407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.139416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 
00:31:37.034 [2024-12-07 05:46:40.139727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.139919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.139928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.140197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.140521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.140531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.140736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.141068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.141081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.141392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.141562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.141573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.141862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.142176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.142186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.034 [2024-12-07 05:46:40.142476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.142797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.034 [2024-12-07 05:46:40.142806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.034 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.143105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.143400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.143410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 
00:31:37.035 [2024-12-07 05:46:40.143685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.144005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.144018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.144341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.144660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.144669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.144950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.145158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.145168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.145444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.145765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.145774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.146056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.146324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.146334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.146644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.146955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.146964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.147274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.147469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.147478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 
00:31:37.035 [2024-12-07 05:46:40.147828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.148113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.148123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.148431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.148711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.148720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.149002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.149309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.149319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.149608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.149925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.149934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.150124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.150479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.150489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.150792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.151904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.151928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.152250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.152597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.152606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 
00:31:37.035 [2024-12-07 05:46:40.152899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.153194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.153204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.153497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.153771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.153780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.154160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.154443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.154452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.154718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.155050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.155060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.155358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.155647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.155657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.155965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.156299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.156310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.156600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.156789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.156799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 
00:31:37.035 [2024-12-07 05:46:40.157112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.157468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.157477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.157781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.158062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.158072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.158370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.158659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.158668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.158981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.159272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.159283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.159587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.159852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.159864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.160198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.160512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.160522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 00:31:37.035 [2024-12-07 05:46:40.160829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.161137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.035 [2024-12-07 05:46:40.161148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.035 qpair failed and we were unable to recover it. 
00:31:37.036 [2024-12-07 05:46:40.161435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.161757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.161768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.161955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.162241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.162251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.162549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.162732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.162741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.162990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.163326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.163335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.163640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.163968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.163978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.164280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.164569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.164578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.164857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.165103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.165113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 
00:31:37.036 [2024-12-07 05:46:40.165376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.165590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.165600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.165781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.165997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.166007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.166326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.166625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.166636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.166915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.167236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.167246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.167509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.167785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.167795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.168112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.168405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.168414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.168596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.168894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.168903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 
00:31:37.036 [2024-12-07 05:46:40.169263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.169613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.169622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.169816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.170124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.170134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.170417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.170707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.170716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.171041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.171359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.171368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.171652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.171976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.171987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.172295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.172691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.172701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.173007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.173302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.173312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 
00:31:37.036 [2024-12-07 05:46:40.173618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.173910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.173920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.174237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.174450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.174459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.174656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.174937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.174946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.175255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.175586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.175595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.175892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.176313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.176323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.176619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.176797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.176806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 00:31:37.036 [2024-12-07 05:46:40.176933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.177247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.177256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.036 qpair failed and we were unable to recover it. 
00:31:37.036 [2024-12-07 05:46:40.177444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.177724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.036 [2024-12-07 05:46:40.177733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.177929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.178148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.178158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.178504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.178815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.178824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.179123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.179461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.179470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.179786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.180162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.180171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.180478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.180773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.180782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.181016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.181330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.181339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 
00:31:37.037 [2024-12-07 05:46:40.181597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.181891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.181900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.182228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.182600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.182609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.182890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.183307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.183317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.183495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.183713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.183723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.184070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.184362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.184372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.184654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.184939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.184949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.185116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.185389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.185400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 
00:31:37.037 [2024-12-07 05:46:40.185719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.185917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.185926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.186123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.186494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.186503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.186720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.187079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.187088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.187436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.187792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.187801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.188085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.188384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.188393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.188673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.188929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.188938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.189303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.189630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.189639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 
00:31:37.037 [2024-12-07 05:46:40.189834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.190040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.190050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.190402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.190794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.190804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.191119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.191447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.191457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.191538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.191833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.191843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.037 qpair failed and we were unable to recover it. 00:31:37.037 [2024-12-07 05:46:40.191946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.037 [2024-12-07 05:46:40.192121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.192131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.192459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.192782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.192791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.193074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.193371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.193381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 
00:31:37.038 [2024-12-07 05:46:40.193692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.193887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.193896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.194068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.194309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.194318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.194507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.194729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.194739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.195065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.195399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.195411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.195686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.195977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.195986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.196303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.196615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.196625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.196822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.197178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.197187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 
00:31:37.038 [2024-12-07 05:46:40.197469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.197768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.197777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.198090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.198313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.198322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.198607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.198906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.198915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.199209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.199465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.199474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.199782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.199885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.199895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.200180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.200482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.200491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.200686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.200869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.200880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 
00:31:37.038 [2024-12-07 05:46:40.201230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.201516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.201525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.201857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.202179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.202191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.202392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.202683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.202693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.203002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.203282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.203292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.203468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.203740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.203750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.204086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.204418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.204427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.204762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.204936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.204945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 
00:31:37.038 [2024-12-07 05:46:40.205145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.205336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.205345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.205721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.205990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.206000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.206298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.206571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.206581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.206897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.207206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.207216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.207429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.207756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.207775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.038 qpair failed and we were unable to recover it. 00:31:37.038 [2024-12-07 05:46:40.207982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.038 [2024-12-07 05:46:40.208274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.208283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.208651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.208709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.208718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 
00:31:37.039 [2024-12-07 05:46:40.209000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.209348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.209358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.209774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.209978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.209988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.210311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.210633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.210643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.210952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.211187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.211197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.211402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.211561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.211570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.211835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.212113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.212123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.212414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.212634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.212644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 
00:31:37.039 [2024-12-07 05:46:40.212927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.213329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.213339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.213542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.213875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.213885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.214293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.214658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.214668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.214972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.215296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.215306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.215410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.215609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.215618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.215826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.216129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.216139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.216451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.216782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.216791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 
00:31:37.039 [2024-12-07 05:46:40.217159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.217396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.217405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.217691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.217997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.218006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.218416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.218720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.218730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.219087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.219305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.219314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.219616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.219821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.219831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.220181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.220455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.220464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.220790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.221083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.221094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 
00:31:37.039 [2024-12-07 05:46:40.221457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.221748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.221759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.221947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.222166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.222176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.222400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.222751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.222761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.223057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.223287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.223296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.039 qpair failed and we were unable to recover it. 00:31:37.039 [2024-12-07 05:46:40.223563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.039 [2024-12-07 05:46:40.223937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.223946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.224138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.224445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.224456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.224753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.225115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.225125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 
00:31:37.040 [2024-12-07 05:46:40.225474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.225688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.225697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.226004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.226314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.226324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.226624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.226927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.226936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.227256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.227543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.227552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.227833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.228137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.228147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.228332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.228628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.228637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.228852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.229080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.229089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 
00:31:37.040 [2024-12-07 05:46:40.229342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.229687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.229696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.230002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.230315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.230324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.230616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.230922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.230931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.231290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.231467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.231477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.231812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.232155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.232165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.232523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.232730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.232739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.233062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.233310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.233319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 
00:31:37.040 [2024-12-07 05:46:40.233512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.233815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.233824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.234135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.234355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.234364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.234576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.234922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.234931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.235237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.235415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.235424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.235610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.235921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.235930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.236236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.236554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.236563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.236852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.237264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.237275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 
00:31:37.040 [2024-12-07 05:46:40.237564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.237873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.237882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.238229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.238539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.238548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.238864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.239156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.239166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.239481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.239866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.040 [2024-12-07 05:46:40.239875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.040 qpair failed and we were unable to recover it. 00:31:37.040 [2024-12-07 05:46:40.240101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.240295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.240304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.240607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.240917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.240927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.241227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.241531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.241541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 
00:31:37.041 [2024-12-07 05:46:40.241861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.242046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.242056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.242434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.242665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.242674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.242987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.243294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.243304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.243546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.243879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.243889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.244250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.244422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.244431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.244734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.245073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.245083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.245251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.245552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.245561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 
00:31:37.041 [2024-12-07 05:46:40.245863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.246178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.246187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.246491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.246798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.246807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.247117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.247438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.247447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.247764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.248123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.248133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.248446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.248640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.248650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.248967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.249186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.249195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.249450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.249674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.249683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 
00:31:37.041 [2024-12-07 05:46:40.249981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.250314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.250325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.250688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.250974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.041 [2024-12-07 05:46:40.250983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.041 qpair failed and we were unable to recover it. 00:31:37.041 [2024-12-07 05:46:40.251161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.251454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.251465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.251660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.251844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.251854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.252142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.252435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.252445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.252721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.252944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.252954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.253167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.253491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.253500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 
00:31:37.313 [2024-12-07 05:46:40.253702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.254032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.254045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.254382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.254557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.254566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.254915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.255229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.255240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.255433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.255725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.255734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.256050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.256368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.256377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.256652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.256978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.256987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.257166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.257517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.257527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 
00:31:37.313 [2024-12-07 05:46:40.257835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.258150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.258160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.258342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.258661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.258670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.258969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.259094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.259104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.259439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.259753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.259762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.260084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.260266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.260275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.260606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.260923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.260933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.261127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.261434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.261443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 
00:31:37.313 [2024-12-07 05:46:40.261803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.262026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.313 [2024-12-07 05:46:40.262035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.313 qpair failed and we were unable to recover it. 00:31:37.313 [2024-12-07 05:46:40.262386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.262683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.262692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.262898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.263181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.263191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.263496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.263739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.263748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.264096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.264410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.264420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.264651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.264865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.264874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.265232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.265523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.265532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 
00:31:37.314 [2024-12-07 05:46:40.265815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.266154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.266164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.266482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.266822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.266831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.267135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.267458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.267467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.267775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.268053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.268063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.268387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.268692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.268702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.268894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.269276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.269285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.269612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.269920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.269930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 
00:31:37.314 [2024-12-07 05:46:40.270209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.270535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.270544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.270739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.271081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.271090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.271419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.271735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.271745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.271936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.272232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.272242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.272413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.272716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.272725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.272948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.273228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.273237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.273537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.273851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.273860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 
00:31:37.314 [2024-12-07 05:46:40.274170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.274443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.274452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.274759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.275054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.275064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.275292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.275634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.275643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.275848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.276234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.276244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.276516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.276847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.276857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.277053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.277380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.277389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.314 qpair failed and we were unable to recover it. 00:31:37.314 [2024-12-07 05:46:40.277674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.314 [2024-12-07 05:46:40.277988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.277999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 
00:31:37.315 [2024-12-07 05:46:40.278339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.278639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.278649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.278945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.279255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.279264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.279575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.279866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.279876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.280183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.280479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.280489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.280692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.280911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.280920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.281147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.281363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.281373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.281686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.282019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.282029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 
00:31:37.315 [2024-12-07 05:46:40.282401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.282693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.282702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.282988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.283350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.283359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.283672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.284026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.284039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.284364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.284674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.284684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.284967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.285275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.285285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.285589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.285918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.285927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.286311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.286599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.286608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 
00:31:37.315 [2024-12-07 05:46:40.286802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.287081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.287091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.287438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.287738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.287749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.288090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.288476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.288485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.288793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.289068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.289078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.289373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.289649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.289658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.289934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.290220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.290230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.290544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.290821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.290831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 
00:31:37.315 [2024-12-07 05:46:40.291111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.291427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.291436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.291725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.292022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.292033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.292389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.292704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.292714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.293020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.293360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.293369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.293657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.293974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.293983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.294290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.294570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.294580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 00:31:37.315 [2024-12-07 05:46:40.294868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.295185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.315 [2024-12-07 05:46:40.295195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.315 qpair failed and we were unable to recover it. 
00:31:37.316 [2024-12-07 05:46:40.295484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.295816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.295825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.296129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.296334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.296343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.296692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.296986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.296996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.297336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.297615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.297625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.297951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.298240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.298251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.298519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.298825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.298835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.299146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.299451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.299461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 
00:31:37.316 [2024-12-07 05:46:40.299773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.300073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.300083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.300404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.300683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.300692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.300910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.301285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.301295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.301489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.301822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.301831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.302019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.302256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.302265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.302538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.302855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.302865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.303214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.303426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.303435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 
00:31:37.316 [2024-12-07 05:46:40.303766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.303969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.303979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.304312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.304602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.304612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.304922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.305144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.305154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.305500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.305672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.305681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.306015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.306332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.306341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.306530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.306903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.306912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.307134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.307522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.307531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 
00:31:37.316 [2024-12-07 05:46:40.307861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.308253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.308716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.308787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.308957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.309255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.309264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.309553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.309851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.309861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.310146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.310473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.310482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 00:31:37.316 [2024-12-07 05:46:40.310768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.311054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.316 [2024-12-07 05:46:40.311064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.316 qpair failed and we were unable to recover it. 
00:31:37.316 [2024-12-07 05:46:40.311453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.311642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.311651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.311939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.312258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.312268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.312457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.312764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.312774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.313102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.313426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.313436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.313628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.313819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.313831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.314057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.314289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.314299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.314640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.314967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.314977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 
00:31:37.317 [2024-12-07 05:46:40.315294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.315574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.315583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.315902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.316180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.316190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.316508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.316821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.316832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.316994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.317288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.317298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.317580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.317762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.317771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.318078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.318293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.318303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.318623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.319009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.319025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 
00:31:37.317 [2024-12-07 05:46:40.319310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.319511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.319520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.319830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.320151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.320161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.320341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.320662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.320672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.320978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.321188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.321197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.321526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.321864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.321874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.322200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.322479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.322489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.322800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.323117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.323127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 
00:31:37.317 [2024-12-07 05:46:40.323477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.323777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.323787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.324097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.324409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.324424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.324751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.324916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.324926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.325098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.325394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.325404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.325712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.325992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.317 [2024-12-07 05:46:40.326001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.317 qpair failed and we were unable to recover it. 00:31:37.317 [2024-12-07 05:46:40.326198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.326496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.326506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.326818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.327111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.327122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 
00:31:37.318 [2024-12-07 05:46:40.327429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.327729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.327738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.328041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.328326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.328335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.328700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.328967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.328976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.329168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.329225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.329235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.329527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.329815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.329825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.330125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.330319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.330328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.330607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.330945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.330955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 
00:31:37.318 [2024-12-07 05:46:40.331356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.331653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.331663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.331956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.332240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.332250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.332556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.332885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.332894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.333171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.333532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.333541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.333841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.334136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.334147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.334442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.334742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.334751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.335061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.335448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.335457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 
00:31:37.318 [2024-12-07 05:46:40.335761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.336079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.336089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.336316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.336659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.336669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.336870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.337186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.337196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.337578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.337884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.337897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.338122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.338392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.338401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.338701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.339017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.339027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.339341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.339678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.339687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 
00:31:37.318 [2024-12-07 05:46:40.339981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.340312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.340322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.340627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.340977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.340986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.341281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.341487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.341496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.318 [2024-12-07 05:46:40.341826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.342118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.318 [2024-12-07 05:46:40.342128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.318 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.342462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.342754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.342763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.343107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.343413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.343423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.343705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.344022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.344034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 
00:31:37.319 [2024-12-07 05:46:40.344322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.344638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.344649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.345047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.345362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.345371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.345668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.346015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.346025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.346203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.346537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.346546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.346866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.347201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.347211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.347558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.347873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.347882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.348178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.348407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.348417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 
00:31:37.319 [2024-12-07 05:46:40.348722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.349039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.349049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.349344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.349629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.349638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.349963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.350330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.350340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.350631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.350964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.350974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.351159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.351366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.351376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.351646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.351855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.351865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.352070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.352271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.352280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 
00:31:37.319 [2024-12-07 05:46:40.352587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.352909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.352919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.353224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.353559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.353569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.353886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.354282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.354292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.354578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.354869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.354878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.355076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.355469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.355479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.355824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.355963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.355973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.356276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.356570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.356580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 
00:31:37.319 [2024-12-07 05:46:40.356903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.357171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.357181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.357506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.357695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.357705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.358050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.358354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.358364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.319 qpair failed and we were unable to recover it. 00:31:37.319 [2024-12-07 05:46:40.358687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.319 [2024-12-07 05:46:40.358867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.358876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.359243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.359441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.359450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.359763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.359939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.359949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.360295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.360611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.360621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 
00:31:37.320 [2024-12-07 05:46:40.360934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.361254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.361264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.361542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.361867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.361878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.362148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.362443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.362454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.362650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.362951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.362961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.363266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.363558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.363567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.363859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.364179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.364189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.364493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.364789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.364798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 
00:31:37.320 [2024-12-07 05:46:40.365109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.365417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.365426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.365624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.365976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.365985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.366294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.366578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.366587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.366907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.367305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.367314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.367611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.367949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.367959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.368246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.368564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.368579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.368910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.369082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.369093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 
00:31:37.320 [2024-12-07 05:46:40.369466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.369763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.369773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.370053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.370352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.370362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.370526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.370805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.370815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.370997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.371333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.371343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.371611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.371965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.371974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.372317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.372628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.372647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.372977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.373309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.373320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 
00:31:37.320 [2024-12-07 05:46:40.373603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.373802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.373813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.374132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.374424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.374434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.374627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.374988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.374998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.375304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.375603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.320 [2024-12-07 05:46:40.375613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.320 qpair failed and we were unable to recover it. 00:31:37.320 [2024-12-07 05:46:40.375901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.376180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.376190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.376402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.376678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.376687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.376987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.377307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.377318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 
00:31:37.321 [2024-12-07 05:46:40.377630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.377958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.377968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.378275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.378568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.378577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.378844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.379018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.379030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.379219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.379480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.379490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.379774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.380093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.380103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.380297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.380655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.380665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.380970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.381174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.381184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 
00:31:37.321 [2024-12-07 05:46:40.381497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.381808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.381818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.382046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.382262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.382272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.382473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.382740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.382749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.383056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.383365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.383375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.383669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.383948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.383957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.384238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.384557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.384567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.384770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.385086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.385096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 
00:31:37.321 [2024-12-07 05:46:40.385284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.385637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.385646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.385836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.386177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.386187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.386474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.386630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.386642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.386946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.387210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.387221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.387392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.387732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.387742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.388059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.388357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.388367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.388632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.388966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.388976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 
00:31:37.321 [2024-12-07 05:46:40.389260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.389571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.389580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.389860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.390156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.390166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.390531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.390845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.390854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.391059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.391356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.391365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.321 qpair failed and we were unable to recover it. 00:31:37.321 [2024-12-07 05:46:40.391676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.391990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.321 [2024-12-07 05:46:40.392002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.392317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.392609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.392619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.392912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.393188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.393207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 
00:31:37.322 [2024-12-07 05:46:40.393558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.393747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.393757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.394071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.394899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.394910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.395113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.395498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.395509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.395680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.396044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.396055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.396376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.396688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.396698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.396948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.397250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.397260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.397585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.397876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.397886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 
00:31:37.322 [2024-12-07 05:46:40.398180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.398464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.398474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.398793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.399123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.399133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.399393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.399720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.399730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.400033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.400355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.400364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.400667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.400966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.400975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.401265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.401594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.401604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.401971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.402257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.402266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 
00:31:37.322 [2024-12-07 05:46:40.402570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.402886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.402896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.403070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.403306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.403315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.403627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.403808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.403819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.404005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.404317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.404328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.404618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.404931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.404940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.405236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.405547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.405556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 00:31:37.322 [2024-12-07 05:46:40.405866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.406190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.322 [2024-12-07 05:46:40.406200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.322 qpair failed and we were unable to recover it. 
00:31:37.322 [2024-12-07 05:46:40.406539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.406820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.406830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.407197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.407467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.407476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.407849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.408156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.408166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.408481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.408783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.408793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.409174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.409486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.409495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.409805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.409995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.410005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.410337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.410650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.410660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 
00:31:37.323 [2024-12-07 05:46:40.410966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.411268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.411278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.411574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.411917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.411927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.412250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.412562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.412572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.412708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.412980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.412991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.413174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.413543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.413554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.413854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.414173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.414183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.414500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.414830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.414839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 
00:31:37.323 [2024-12-07 05:46:40.415205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.415515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.415525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.415882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.416174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.416184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.416468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.416746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.416755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.417064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.417353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.417363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.417645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.417970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.417979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.418285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.418584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.418593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.418898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.419215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.419225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 
00:31:37.323 [2024-12-07 05:46:40.419480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.419770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.419779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.420052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.420302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.420311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.420630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.420914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.420923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.421219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.421539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.421549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.421836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.422000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.422016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.422317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.422608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.422618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.323 [2024-12-07 05:46:40.422800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.423131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.423144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 
00:31:37.323 [2024-12-07 05:46:40.423452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.423765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.323 [2024-12-07 05:46:40.423776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.323 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.424088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.424376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.424386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.424590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.424870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.424880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.425261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.425421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.425432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.425733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.426038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.426049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.426436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.426765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.426774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.427068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.427364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.427373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 
00:31:37.324 [2024-12-07 05:46:40.427705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.427903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.427913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.428192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.428519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.428528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.428724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.428999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.429008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.429297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.429607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.429617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.429917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.430143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.430152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.430427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.430649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.430658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.430962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.431282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.431292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 
00:31:37.324 [2024-12-07 05:46:40.431574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.431891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.431900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.432199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.432509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.432519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.432828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.433162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.433171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.433443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.433766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.433775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.434053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.434336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.434347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.434676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.434987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.434997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.435331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.435644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.435654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 
00:31:37.324 [2024-12-07 05:46:40.435967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.436163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.436172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.436461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.436781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.436790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.437119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.437428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.437438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.437728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.438062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.438072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.438275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.438506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.438524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.438866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.439156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.439166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.439476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.439816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.439826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 
00:31:37.324 [2024-12-07 05:46:40.440207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.440497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.440507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.324 qpair failed and we were unable to recover it. 00:31:37.324 [2024-12-07 05:46:40.440822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.324 [2024-12-07 05:46:40.441142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.441152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.441447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.441645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.441655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.441972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.442298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.442307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.442610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.442937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.442946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.443255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.443552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.443561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.443871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.444678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.444699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 
00:31:37.325 [2024-12-07 05:46:40.445024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.446157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.446180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.446487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.446767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.446777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.447057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.447383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.447394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.447676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.447993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.448003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.448328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.448632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.448642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.448807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.449005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.449028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.449296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.449599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.449608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 
00:31:37.325 [2024-12-07 05:46:40.449901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.450222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.450232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.450518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.450715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.450724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.451015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.451311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.451320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.451622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.451919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.451928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.452161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.452437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.452447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.452770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.453088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.453099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.453411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.453697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.453706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 
00:31:37.325 [2024-12-07 05:46:40.453903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.454193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.454203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.454492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.454810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.454820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.455137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.455463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.455473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.455777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.456068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.456078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.456394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.456670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.456679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.456872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.457181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.457191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.457515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.457830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.457840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 
00:31:37.325 [2024-12-07 05:46:40.458001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.458313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.458323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.325 [2024-12-07 05:46:40.458629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.458834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.325 [2024-12-07 05:46:40.458843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.325 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.459149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.459448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.459458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.459763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.460083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.460093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.460343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.460575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.460584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.460929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.461025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.461034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.461266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.461561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.461571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 
00:31:37.326 [2024-12-07 05:46:40.461873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.462165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.462175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.462372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.462705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.462714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.462992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.463287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.463297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.463487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.463747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.463757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.464078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.464409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.464419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.464702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.465020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.465030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.465352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.465679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.465689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 
00:31:37.326 [2024-12-07 05:46:40.465865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.466133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.466144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.466437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.466731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.466740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.467023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.467336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.467345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.467656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.467984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.467994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.468303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.468607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.468617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.468821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.469102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.469112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.469404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.469681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.469691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 
00:31:37.326 [2024-12-07 05:46:40.469861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.470140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.470150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.470456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.470773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.470782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.471088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.471409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.471418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.471697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.471987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.471996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.472312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.472580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.472590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.472921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.473228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.473239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.473541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.473869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.473879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 
00:31:37.326 [2024-12-07 05:46:40.474203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.474490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.474500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.474803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.475145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.475155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.326 qpair failed and we were unable to recover it. 00:31:37.326 [2024-12-07 05:46:40.475447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.326 [2024-12-07 05:46:40.475772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.475781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.475998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.476336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.476345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.476630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.476947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.476956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.477244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.477541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.477550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.477840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.478156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.478166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 
00:31:37.327 [2024-12-07 05:46:40.478469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.478763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.478774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.479074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.479387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.479396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.479676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.479964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.479973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.480283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.480584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.480593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.480900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.481092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.481102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.481402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.481709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.481719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.482006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.482211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.482221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 
00:31:37.327 [2024-12-07 05:46:40.482432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.482696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.482706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.483009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.483322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.483332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.483545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.483866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.483876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.484180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.484373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.484383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.484695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.484998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.485008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.485344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.485637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.485646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.485950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.486250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.486260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 
00:31:37.327 [2024-12-07 05:46:40.486552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.486880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.486889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.487175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.487496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.487506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.487803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.488114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.488124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.488462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.488781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.488790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.488965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.489225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.489235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.489531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.489849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.489859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.327 qpair failed and we were unable to recover it. 00:31:37.327 [2024-12-07 05:46:40.490155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.327 [2024-12-07 05:46:40.490450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.328 [2024-12-07 05:46:40.490460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.328 qpair failed and we were unable to recover it. 
[... the same failure pattern repeats continuously from 05:46:40.490 through 05:46:40.571: pairs of posix.c:1032:posix_sock_create connect() errors with errno = 111, each followed by an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x718380 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:31:37.604 [2024-12-07 05:46:40.571883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.604 [2024-12-07 05:46:40.572161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.604 [2024-12-07 05:46:40.572171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.604 qpair failed and we were unable to recover it. 00:31:37.604 [2024-12-07 05:46:40.572444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.604 [2024-12-07 05:46:40.572614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.604 [2024-12-07 05:46:40.572625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.572807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.573075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.573086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.573213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.573481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.573491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.573801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.574108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.574117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.574325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.574628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.574637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.574946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.575241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.575251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 
00:31:37.605 [2024-12-07 05:46:40.575574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.575906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.575915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.576200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.576508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.576518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.576831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.577152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.577161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.577458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.577788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.577798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.578123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.578429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.578438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.578751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.579071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.579081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.579370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.579739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.579748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 
00:31:37.605 [2024-12-07 05:46:40.579910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.580114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.580130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.580342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.580660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.580670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.580846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.581111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.581121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.581485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.581791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.581801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.582107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.582342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.582351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.582632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.582954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.582964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.583256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.583583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.583592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 
00:31:37.605 [2024-12-07 05:46:40.583755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.583916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.583926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.584088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.584377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.584387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.584689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.585007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.585021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.585242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.585455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.585465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.605 qpair failed and we were unable to recover it. 00:31:37.605 [2024-12-07 05:46:40.585736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.605 [2024-12-07 05:46:40.585923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.585933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.586159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.586327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.586336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.586678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.586967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.586976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 
00:31:37.606 [2024-12-07 05:46:40.587187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.587368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.587379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.587695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.587849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.587858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.588214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.588426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.588436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.588638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.588909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.588919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.589309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.589675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.589685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.589995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.590213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.590224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.590440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.590744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.590755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 
00:31:37.606 [2024-12-07 05:46:40.591066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.591411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.591421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.591734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.591956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.591965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.592268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.592483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.592493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.592688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.592859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.592868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.593242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.593560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.593569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.593873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.594193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.594203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.594556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.594854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.594863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 
00:31:37.606 [2024-12-07 05:46:40.595165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.595493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.595503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.595702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.596043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.596053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.596373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.596561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.596570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.596890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.597220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.597230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.597520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.597700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.597710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.598032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.598235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.606 [2024-12-07 05:46:40.598245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.606 qpair failed and we were unable to recover it. 00:31:37.606 [2024-12-07 05:46:40.598546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.598861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.598871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 
00:31:37.607 [2024-12-07 05:46:40.599182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.599487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.599496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.599778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.600118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.600127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.600409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.600723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.600732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.601042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.601373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.601383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.601689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.601836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.601847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.602065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.602360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.602369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.602676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.602971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.602980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 
00:31:37.607 [2024-12-07 05:46:40.603280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.603554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.603563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.603868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.604178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.604188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.604461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.604776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.604785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.605075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.605277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.605287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.605544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.605879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.605889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.606171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.606376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.606385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.606755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.607036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.607046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 
00:31:37.607 [2024-12-07 05:46:40.607412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.607727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.607738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.608039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.608315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.608325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.608636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.608830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.608839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.609083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.609394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.609403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.609713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.610022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.610032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.610302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.610482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.610492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.610893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.611212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.611222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 
00:31:37.607 [2024-12-07 05:46:40.611528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.611843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.611854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.607 qpair failed and we were unable to recover it. 00:31:37.607 [2024-12-07 05:46:40.612161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.612459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.607 [2024-12-07 05:46:40.612470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.612786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.613073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.613083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.613363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.613681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.613691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.613997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.614388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.614400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.614716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.615043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.615053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.615375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.615686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.615696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 
00:31:37.608 [2024-12-07 05:46:40.616000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.616301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.616310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.616601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.616883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.616893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.617186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.617591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.617600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.617897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.618189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.618199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.618478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.618815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.618825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.619054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.619277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.619286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.619579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.619869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.619877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 
00:31:37.608 [2024-12-07 05:46:40.620179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.620492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.620501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.620848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.621072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.621083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.621415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.621706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.621717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.621914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.622225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.622234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.608 qpair failed and we were unable to recover it. 00:31:37.608 [2024-12-07 05:46:40.622433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.608 [2024-12-07 05:46:40.622802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.622812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.623018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.623330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.623340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.623625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.623904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.623914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 
00:31:37.609 [2024-12-07 05:46:40.624221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.624527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.624537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.624842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.625142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.625153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.625467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.625809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.625820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.626112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.626317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.626326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.626542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.626877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.626886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.627244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.627576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.627586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.627900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.628226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.628237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 
00:31:37.609 [2024-12-07 05:46:40.628538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.628847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.628857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.629146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.629448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.629457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.629765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.630083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.630094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.630259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.630570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.630579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.630869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.631162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.631172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.631468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.631624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.631634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 00:31:37.609 [2024-12-07 05:46:40.631964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.632268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.609 [2024-12-07 05:46:40.632278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.609 qpair failed and we were unable to recover it. 
00:31:37.609 [2024-12-07 05:46:40.632580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.609 [2024-12-07 05:46:40.632892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.609 [2024-12-07 05:46:40.632901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:37.609 qpair failed and we were unable to recover it.
[... the same four-line failure block (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420" record, then "qpair failed and we were unable to recover it.") repeats back-to-back for every retry attempt with timestamps from 05:46:40.632580 through 05:46:40.723035 ...]
00:31:37.616 [2024-12-07 05:46:40.722687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.616 [2024-12-07 05:46:40.723025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.616 [2024-12-07 05:46:40.723035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:37.616 qpair failed and we were unable to recover it.
00:31:37.616 [2024-12-07 05:46:40.723190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.723463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.723473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.723675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.724029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.724039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.724329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.724640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.724650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.724958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.725247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.725256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.725538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.725859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.725868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.726182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.726479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.726489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.726783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.727090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.727100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 
00:31:37.616 [2024-12-07 05:46:40.727404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.727706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.727715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.727998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.728323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.728333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.728609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.728800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.728812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.728997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.729273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.729282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.729602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.729909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.729919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.730230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.730520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.730529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.730923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.731208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.731220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 
00:31:37.616 [2024-12-07 05:46:40.731347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.731611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.731620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.616 qpair failed and we were unable to recover it. 00:31:37.616 [2024-12-07 05:46:40.731920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.616 [2024-12-07 05:46:40.732193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.732202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.732485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.732878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.732887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.733074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.733381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.733389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.733713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.734041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.734051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.734359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.734672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.734683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.734974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.735256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.735267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 
00:31:37.617 [2024-12-07 05:46:40.735475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.735634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.735644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.735995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.736254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.736264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.736613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.736894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.736904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.737222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.737497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.737506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.737793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.738107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.738118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.738445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.738732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.738742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.739061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.739369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.739379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 
00:31:37.617 [2024-12-07 05:46:40.739661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.739981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.739991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.740215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.740517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.740526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.740828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.741158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.741168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.741483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.741767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.741776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.742123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.742431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.742441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.742748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.743033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.743043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.743354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.743627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.743636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 
00:31:37.617 [2024-12-07 05:46:40.743888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.744164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.744174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.744469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.744788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.744797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.745116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.745465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.745474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.617 [2024-12-07 05:46:40.745782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.746070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.617 [2024-12-07 05:46:40.746080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.617 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.746394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.746618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.746628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.746934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.747122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.747132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.747449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.747768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.747778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 
00:31:37.618 [2024-12-07 05:46:40.748084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.748384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.748402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.748611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.748790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.748799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.749032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.749364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.749373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.749659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.749944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.749953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.750198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.750501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.750510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.750796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.751126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.751136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.751345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.751719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.751729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 
00:31:37.618 [2024-12-07 05:46:40.752037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.752357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.752367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.752672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.753005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.753026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.753329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.753613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.753623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.753901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.754190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.754200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.754441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.754658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.754667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.754992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.755315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.755325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.755610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.755918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.755928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 
00:31:37.618 [2024-12-07 05:46:40.756241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.756519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.756528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.756714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.756916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.756926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.757123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.757310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.757322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.757631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.757927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.757937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.758317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.758610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.758622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.758943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.759265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.618 [2024-12-07 05:46:40.759275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.618 qpair failed and we were unable to recover it. 00:31:37.618 [2024-12-07 05:46:40.759574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.759890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.759900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 
00:31:37.619 [2024-12-07 05:46:40.760183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.760478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.760489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.760773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.760974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.760984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.761263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.761607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.761618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.761922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.762228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.762237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.762548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.762825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.762834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.763120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.763319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.763329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.763521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.763685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.763694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 
00:31:37.619 [2024-12-07 05:46:40.764061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.764363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.764372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.764699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.765034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.765044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.765238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.765507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.765516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.765833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.766210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.766220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.766499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.766782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.766791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.767117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.767422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.767431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.767747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.768032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.768042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 
00:31:37.619 [2024-12-07 05:46:40.768358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.768636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.768645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.768960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.769260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.769270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.769590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.769907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.769917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.770194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.770423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.770432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.619 qpair failed and we were unable to recover it. 00:31:37.619 [2024-12-07 05:46:40.770744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.771061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.619 [2024-12-07 05:46:40.771071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.771365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.771650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.771660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.771958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.772278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.772288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 
00:31:37.620 [2024-12-07 05:46:40.772625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.772907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.772916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.773202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.773508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.773517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.773823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.774150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.774160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.774475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.774759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.774768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.775151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.775462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.775471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.775742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.775930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.775939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.776251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.776574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.776583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 
00:31:37.620 [2024-12-07 05:46:40.776886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.777098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.777108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.777405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.777724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.777733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.777936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.778280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.778290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.778613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.778923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.778933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.779258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.779571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.779580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.779906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.780119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.780128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.780388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.780709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.780720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 
00:31:37.620 [2024-12-07 05:46:40.781018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.781404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.781413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.781742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.782078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.782087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.782300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.782607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.782617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.782839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.783138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.783151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.783463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.783666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.783676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.783998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.784321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.784330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 00:31:37.620 [2024-12-07 05:46:40.784609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.784929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.620 [2024-12-07 05:46:40.784938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.620 qpair failed and we were unable to recover it. 
00:31:37.898 [2024-12-07 05:46:40.870569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.898 [2024-12-07 05:46:40.870762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.870771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.871055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.871309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.871318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.871546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.871745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.871756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.871952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.872245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.872255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.872556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.872859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.872868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.873033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.873297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.873307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.873613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.873954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.873963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 
00:31:37.899 [2024-12-07 05:46:40.874159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.874458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.874468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.874850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.875163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.875173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.875380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.875740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.875749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.876055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.876387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.876397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.876567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.876906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.876916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.877217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.877529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.877539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.877774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.877966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.877976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 
00:31:37.899 [2024-12-07 05:46:40.878283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.878461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.878471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.878791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.879125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.879135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.879440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.879758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.879767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.880079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.880409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.880419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.880620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.880954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.880964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.881204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.881533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.881542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.881833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.882148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.882159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 
00:31:37.899 [2024-12-07 05:46:40.882476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.882777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.882787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.883078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.883291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.883301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.883629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.883931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.883941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.884246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.884565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.884574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.884846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.885114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.885124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.885423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.885702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.885711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 00:31:37.899 [2024-12-07 05:46:40.886020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.886208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.899 [2024-12-07 05:46:40.886218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.899 qpair failed and we were unable to recover it. 
00:31:37.899 [2024-12-07 05:46:40.886502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.886830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.886840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.887046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.887329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.887338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.887632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.887849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.887858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.888178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.888504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.888513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.888733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.889074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.889083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.889377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.889702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.889711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.890020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.890204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.890213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 
00:31:37.900 [2024-12-07 05:46:40.890515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.890795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.890804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.891089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.891414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.891423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.891743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.892056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.892065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.892381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.892653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.892663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.892796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.893160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.893170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.893461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.893776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.893786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.894090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.894394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.894403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 
00:31:37.900 [2024-12-07 05:46:40.894671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.894866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.894875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.895184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.895484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.895495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.895795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.895868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.895878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.896188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.896506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.896516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.896847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.897093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.897103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.897178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.897428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.897437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.897738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.898044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.898054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 
00:31:37.900 [2024-12-07 05:46:40.898360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.898521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.898530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.898810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.899128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.899138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.899465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.899794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.899804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.900133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.900451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.900461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.900801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.901085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.901094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.901417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.901611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.901620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 00:31:37.900 [2024-12-07 05:46:40.902024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.902361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.900 [2024-12-07 05:46:40.902370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.900 qpair failed and we were unable to recover it. 
00:31:37.901 [2024-12-07 05:46:40.902545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.902823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.902833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.903042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.903311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.903320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.903641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.903956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.903965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.904304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.904513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.904522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.904811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.905122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.905132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.905445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.905781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.905791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.905976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.906323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.906333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 
00:31:37.901 [2024-12-07 05:46:40.906644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.906932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.906943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.907238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.907540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.907550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.907903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.908183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.908193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.908486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.908779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.908788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.908974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.909231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.909241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.909544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.909920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.909929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.910210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.910523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.910533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 
00:31:37.901 [2024-12-07 05:46:40.910848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.911114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.911123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.911434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.911633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.911642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.911852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.912169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.912180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.912352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.912705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.912715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.912909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.913223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.913233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.913536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.913831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.913840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.914124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.914455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.914465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 
00:31:37.901 [2024-12-07 05:46:40.914645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.915009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.915024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.901 qpair failed and we were unable to recover it. 00:31:37.901 [2024-12-07 05:46:40.915328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.915621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.901 [2024-12-07 05:46:40.915631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.915914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.916182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.916192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.916406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.916724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.916734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.917042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.917368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.917378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.917588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.917848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.917858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.918184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.918513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.918523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 
00:31:37.902 [2024-12-07 05:46:40.918859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.919201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.919211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.919541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.919849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.919859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.920024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.920337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.920347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.920667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.921029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.921039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.921343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.921651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.921661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.921871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.922062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.922073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.922352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.922548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.922558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 
00:31:37.902 [2024-12-07 05:46:40.922860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.923186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.923196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.923538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.923842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.923851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.924169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.924492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.924501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.924681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.924995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.925007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.925332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.925502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.925512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.925723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.926000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.926013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 00:31:37.902 [2024-12-07 05:46:40.926320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.926630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.926640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it. 
00:31:37.902 [2024-12-07 05:46:40.926946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.927129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.902 [2024-12-07 05:46:40.927141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.902 qpair failed and we were unable to recover it.
00:31:37.902-00:31:37.908 [... the same failure sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420" error, followed by "qpair failed and we were unable to recover it") repeats without variation from 2024-12-07 05:46:40.926946 through 05:46:41.017061 ...]
00:31:37.908 [2024-12-07 05:46:41.017382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.017672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.017683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.017954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.018164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.018174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.018544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.018823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.018832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.019035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.019318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.019327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.019525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.019825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.019834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.020139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.020364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.020373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.020685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.021017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.021027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 
00:31:37.908 [2024-12-07 05:46:41.021219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.021548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.021557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.021927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.022182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.022192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.022513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.022842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.022851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.023035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.023362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.023372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.023586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.023888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.023898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.024216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.024526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.024536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.024841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.025137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.025146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 
00:31:37.908 [2024-12-07 05:46:41.025437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.025739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.025748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.026054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.026407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.026416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.026713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.027026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.027035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.027247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.027533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.027543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.027856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.028174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.028185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.028486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.028799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.028809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 00:31:37.908 [2024-12-07 05:46:41.029140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.029337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.029346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.908 qpair failed and we were unable to recover it. 
00:31:37.908 [2024-12-07 05:46:41.029658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.029962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.908 [2024-12-07 05:46:41.029972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.030209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.030575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.030585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.030883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.031184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.031194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.031482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.031797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.031807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.032121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.032427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.032437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.032818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.033117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.033130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.033332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.033605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.033614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 
00:31:37.909 [2024-12-07 05:46:41.033891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.034069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.034079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.034284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.034581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.034590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.034803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.035092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.035102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.035410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.035718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.035727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.036017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.036350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.036359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.036520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.036784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.036794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.037120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.037411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.037420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 
00:31:37.909 [2024-12-07 05:46:41.037579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.037761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.037771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.038086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.038382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.038392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.038705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.039019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.039030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.039303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.039614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.039624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.039816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.040108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.040117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.040404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.040726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.040736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.040921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.041269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.041279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 
00:31:37.909 [2024-12-07 05:46:41.041561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.041849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.041858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.042164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.042461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.042470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.042748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.043077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.043087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.043477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.043781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.043790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.044101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.044393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.044402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.044686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.044988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.044997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.045366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.045657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.045666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 
00:31:37.909 [2024-12-07 05:46:41.045948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.046242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.046252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.909 qpair failed and we were unable to recover it. 00:31:37.909 [2024-12-07 05:46:41.046553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.909 [2024-12-07 05:46:41.046758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.046767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.046975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.047351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.047361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.047648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.047748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.047757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.047926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.048137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.048147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.048441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.048622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.048632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.048905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.049229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.049239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 
00:31:37.910 [2024-12-07 05:46:41.049528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.049845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.049854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.050184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.050531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.050541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.050856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.051159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.051169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.051470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.051752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.051762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.052035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.052375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.052384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.052683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.052962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.052972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.053281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.053572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.053581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 
00:31:37.910 [2024-12-07 05:46:41.053873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.054177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.054187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.054394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.054462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.054471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.054791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.055076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.055086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.055285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.055661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.055670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.055974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.056305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.056315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.056522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.056852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.056862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.056997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.057291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.057301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 
00:31:37.910 [2024-12-07 05:46:41.057612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.057942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.057951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.058143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.058519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.058528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.058818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.059138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.059148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.059458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.059621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.059631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.059953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.060168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.060178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.060500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.060695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.910 [2024-12-07 05:46:41.060704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.910 qpair failed and we were unable to recover it. 00:31:37.910 [2024-12-07 05:46:41.060919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.061213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.061223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 
00:31:37.911 [2024-12-07 05:46:41.061536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.061744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.061755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.062101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.062415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.062425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.062770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.063083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.063093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.063410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.063734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.063744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.063928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.064214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.064224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.064504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.064793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.064803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.065089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.065360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.065369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 
00:31:37.911 [2024-12-07 05:46:41.065535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.065808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.065818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.066143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.066455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.066465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.066762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.067024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.067033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.067361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.067648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.067658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.067965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.068254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.068263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.068437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.068773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.068782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.069005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.070063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.070075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 
00:31:37.911 [2024-12-07 05:46:41.070365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.070656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.070666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.071001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.071294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.071304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.071477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.071781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.071791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.071963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.072159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.072169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.072352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.072670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.072680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.073052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.073349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.073358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 00:31:37.911 [2024-12-07 05:46:41.073653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.073973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.911 [2024-12-07 05:46:41.073982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:37.911 qpair failed and we were unable to recover it. 
00:31:37.911 [2024-12-07 05:46:41.074267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.911 [2024-12-07 05:46:41.074542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.911 [2024-12-07 05:46:41.074551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:37.911 qpair failed and we were unable to recover it.
00:31:37.911 - 00:31:38.187 [2024-12-07 05:46:41.074870 - 05:46:41.165885] the same failure sequence repeats continuously, with only the timestamps changing: two posix.c:1032:posix_sock_create connect() errors with errno = 111, followed by an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock connection error for tqpair=0x718380 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:31:38.187 [2024-12-07 05:46:41.166160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.166446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.166455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.187 qpair failed and we were unable to recover it. 00:31:38.187 [2024-12-07 05:46:41.166655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.166965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.166974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.187 qpair failed and we were unable to recover it. 00:31:38.187 [2024-12-07 05:46:41.167271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.167541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.167550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.187 qpair failed and we were unable to recover it. 00:31:38.187 [2024-12-07 05:46:41.167852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.168183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.168193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.187 qpair failed and we were unable to recover it. 00:31:38.187 [2024-12-07 05:46:41.168474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.168642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.187 [2024-12-07 05:46:41.168652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.187 qpair failed and we were unable to recover it. 00:31:38.187 [2024-12-07 05:46:41.168972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.169288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.169298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.169497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.169846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.169855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 
00:31:38.188 [2024-12-07 05:46:41.170121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.170426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.170435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.170721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.170880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.170890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.171186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.171506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.171516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.171843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.172053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.172063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.172372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.172564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.172573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.172858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.173179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.173190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.173497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.173786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.173795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 
00:31:38.188 [2024-12-07 05:46:41.174096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.174418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.174427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.174711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.175021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.175030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.175188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.175452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.175462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.175775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.176065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.176075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.176392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.176708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.176717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.177119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.177306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.177317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.177619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.177943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.177953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 
00:31:38.188 [2024-12-07 05:46:41.178258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.178585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.178595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.178902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.179215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.179225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.179526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.179836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.179845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.180053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.180340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.180350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.180674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.180984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.180993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.181168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.181526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.181535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.181824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.182158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.182167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 
00:31:38.188 [2024-12-07 05:46:41.182455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.182612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.182621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.182898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.183240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.183250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.183533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.183823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.183832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.184108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.184427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.184436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.184721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.185032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.185047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.188 [2024-12-07 05:46:41.185209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.185589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.188 [2024-12-07 05:46:41.185598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.188 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.185881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.186172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.186181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 
00:31:38.189 [2024-12-07 05:46:41.186486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.186792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.186802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.187079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.187386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.187395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.187778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.187931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.187941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.188231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.188547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.188556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.188849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.189051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.189061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.189366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.189709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.189718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.189892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.190161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.190171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 
00:31:38.189 [2024-12-07 05:46:41.190505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.190814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.190824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.190982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.191268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.191278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.191558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.191878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.191887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.192185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.192484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.192493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.192774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.193107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.193116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.193291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.193607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.193616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.193804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.194168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.194178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 
00:31:38.189 [2024-12-07 05:46:41.194477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.194667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.194677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.194981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.195271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.195280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.195589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.195923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.195933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.196243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.196561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.196571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.196872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.197183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.197193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.197472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.197788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.197797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.198078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.198381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.198391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 
00:31:38.189 [2024-12-07 05:46:41.198677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.198996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.199005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.199292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.199604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.199614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.199919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.200229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.189 [2024-12-07 05:46:41.200240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.189 qpair failed and we were unable to recover it. 00:31:38.189 [2024-12-07 05:46:41.200544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.200863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.200873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.201173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.201487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.201499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.201806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.202129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.202140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.202460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.202771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.202781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 
00:31:38.190 [2024-12-07 05:46:41.203097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.203404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.203418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.203746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.204061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.204071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.204377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.204694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.204704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.205004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.205372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.205382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.205686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.205984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.205993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.206369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.206559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.206568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.206874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.207182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.207191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 
00:31:38.190 [2024-12-07 05:46:41.207494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.207692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.207701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.208015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.208323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.208332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.208620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.208950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.208959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.209260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.209568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.209578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.209882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.210192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.210201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.210480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.210762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.210771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.211062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.211364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.211373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 
00:31:38.190 [2024-12-07 05:46:41.211537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.211811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.211821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.212153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.212424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.212433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.212730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.213027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.213036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.213222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.213502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.213511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.213839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.214131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.214141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.214416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.214737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.214746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.214950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.215288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.215298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 
00:31:38.190 [2024-12-07 05:46:41.215579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.215860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.215869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.216151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.216452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.190 [2024-12-07 05:46:41.216461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.190 qpair failed and we were unable to recover it. 00:31:38.190 [2024-12-07 05:46:41.216773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.217073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.217083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.217386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.217699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.217709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.218045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.218355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.218364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.218674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.218985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.218994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.219362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.219663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.219672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 
00:31:38.191 [2024-12-07 05:46:41.219981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.220272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.220282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.220572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.220893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.220902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.221211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.221540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.221549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.221713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.221981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.221991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.222312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.222512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.222521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.222856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.223184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.223194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.223493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.223806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.223816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 
00:31:38.191 [2024-12-07 05:46:41.224030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.224306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.224315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.224638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.224930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.224940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.225244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.225559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.225569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.225872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.226204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.226215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.226488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.226777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.226786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.227087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.227289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.227298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.227486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.227751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.227760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 
00:31:38.191 [2024-12-07 05:46:41.227868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.228071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.228082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.228389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.228705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.228715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.228969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.229264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.229273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.229569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.229878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.229888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.230177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.230473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.230482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.230757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.231077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.231086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.231401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.231700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.231712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 
00:31:38.191 [2024-12-07 05:46:41.231993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.232313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.232322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.232604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.232927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.232937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.191 [2024-12-07 05:46:41.233295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.233582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.191 [2024-12-07 05:46:41.233592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.191 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.233894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.234221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.234231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.234559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.234843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.234852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.235159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.235484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.235494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.235799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.236082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.236092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 
00:31:38.192 [2024-12-07 05:46:41.236302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.236644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.236653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.236939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.237254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.237263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.237568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.237903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.237913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.238088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.238451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.238460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.238772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.239082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.239091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.239379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.239671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.239680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.239960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.240263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.240273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 
00:31:38.192 [2024-12-07 05:46:41.240555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.240884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.240893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.241176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.241507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.241516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.241720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.242082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.242093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.242405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.242714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.242723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.242907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.243207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.243218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.243525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.243815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.243825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.244144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.244336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.244346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 
00:31:38.192 [2024-12-07 05:46:41.244663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.244984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.244994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.245308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.245499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.245508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.245798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.246117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.246128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.246322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.246652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.246667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.247007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.247182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.247192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.247544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.247859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.247869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.248182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.248344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.248354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 
00:31:38.192 [2024-12-07 05:46:41.248630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.248961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.248971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.249297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.249600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.249610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.249912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.250218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.250228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.192 [2024-12-07 05:46:41.250540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.250860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.192 [2024-12-07 05:46:41.250869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.192 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.251151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.251469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.251478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.251786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.252036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.252046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.252329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.252615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.252624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 
00:31:38.193 [2024-12-07 05:46:41.252918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.253227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.253237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.253542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.253892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.253901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.254280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.254575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.254584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.254788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.255144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.255154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.255460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.255826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.255835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.256173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.256383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.256392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.256710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.256993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.257002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 
00:31:38.193 [2024-12-07 05:46:41.257333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.257644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.257654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.257950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.258228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.258239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.258514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.258837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.258846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.259153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.259468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.259479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.259788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.260105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.260115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.260427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.260642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.260651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.260827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.261120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.261129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 
00:31:38.193 [2024-12-07 05:46:41.261406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.261691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.261700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.261897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.262234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.262248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.262536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.262853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.262862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.263150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.263478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.263488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.263768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.264060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.264070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.264384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.264719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.264728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.265034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.265351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.265360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 
00:31:38.193 [2024-12-07 05:46:41.265658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.265817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.265828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.266020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.266321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.266330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.266635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.266946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.266955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.267152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.267461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.267471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.193 qpair failed and we were unable to recover it. 00:31:38.193 [2024-12-07 05:46:41.267773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.268088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.193 [2024-12-07 05:46:41.268097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.268411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.268707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.268716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.269019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.269331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.269340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 
00:31:38.194 [2024-12-07 05:46:41.269649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.269834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.269844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.270120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.270443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.270453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.270754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.271045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.271054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.271382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.271645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.271654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.271958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.272262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.272271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.272596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.272881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.272890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.273184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.273477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.273486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 
00:31:38.194 [2024-12-07 05:46:41.273792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.274107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.274117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.274357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.274539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.274548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.274834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.275145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.275155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.275460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.275763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.275773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.276100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.276410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.276419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.276723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.277017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.277027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.277405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.277717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.277726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 
00:31:38.194 [2024-12-07 05:46:41.278019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.278343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.278352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.278632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.278913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.278922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.279227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.279545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.279554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.279838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.280127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.280137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.280439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.280735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.280745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.281008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.281296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.281306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.194 qpair failed and we were unable to recover it. 00:31:38.194 [2024-12-07 05:46:41.281523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.281847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.194 [2024-12-07 05:46:41.281856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 
00:31:38.195 [2024-12-07 05:46:41.282185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.282464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.282474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.282658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.282965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.282974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.283181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.283461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.283470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.283773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.284078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.284088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.284384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.284695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.284704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.285006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.285373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.285383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.285663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.285855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.285864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 
00:31:38.195 [2024-12-07 05:46:41.286191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.286464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.286475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.286756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.286924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.286935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.287240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.287554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.287564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.287870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.288199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.288208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.288515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.288813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.288822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.289112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.289447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.289456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.289738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.290019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.290028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 
00:31:38.195 [2024-12-07 05:46:41.290325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.290613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.290622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.290921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.291110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.291121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.291416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.291638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.291648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.291945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.292239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.292248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.292576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.292881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.292890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.293177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.293357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.293368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.293692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.293884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.293893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 
00:31:38.195 [2024-12-07 05:46:41.294169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.294464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.294473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.294757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.295022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.295031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.295246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.295569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.295578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.295856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.296079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.296088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.296402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.296690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.296700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.297009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.297288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.297297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 00:31:38.195 [2024-12-07 05:46:41.297576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.297903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.297912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.195 qpair failed and we were unable to recover it. 
00:31:38.195 [2024-12-07 05:46:41.298268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.195 [2024-12-07 05:46:41.298581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.298590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.298903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.299199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.299209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.299494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.299687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.299696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.300024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.300334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.300344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.300704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.300995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.301004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.301339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.301539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.301548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.301762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.301961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.301970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 
00:31:38.196 [2024-12-07 05:46:41.302276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.302594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.302603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.302902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.303196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.303206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.303393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.303722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.303732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.304053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.304344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.304353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.304644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.304924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.304933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.305298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.305480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.305489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.305681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.305860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.305870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 
00:31:38.196 [2024-12-07 05:46:41.306195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.306360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.306370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.306686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.306987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.306997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.307195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.307534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.307543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.307848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.308151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.308160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.308469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.308788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.308797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.309102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.309414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.309424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.309710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.309913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.309922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 
00:31:38.196 [2024-12-07 05:46:41.310219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.310566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.310576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.310864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.311058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.311068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.311400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.311579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.311590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.311920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.312224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.312233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.312540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.312834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.312843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.313123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.313492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.313501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.313829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.314101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.314111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 
00:31:38.196 [2024-12-07 05:46:41.314287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.314518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.314528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.196 qpair failed and we were unable to recover it. 00:31:38.196 [2024-12-07 05:46:41.314831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.196 [2024-12-07 05:46:41.315155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.315165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.315492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.315808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.315820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.316123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.316420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.316429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.316730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.317044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.317053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.317338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.317690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.317699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.317979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.318282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.318291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 
00:31:38.197 [2024-12-07 05:46:41.318590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.318881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.318890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.319194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.319514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.319523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.319682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.320014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.320025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.320241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.320572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.320589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.320791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.321092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.321102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.321416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.321695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.321704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.322020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.322344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.322353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 
00:31:38.197 [2024-12-07 05:46:41.322676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.322994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.323003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.323285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.323605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.323615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.323895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.324226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.324236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.324525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.324835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.324845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.325149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.325548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.325558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.325814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.326152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.326163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.326476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.326658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.326668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 
00:31:38.197 [2024-12-07 05:46:41.326866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.327048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.327057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.327268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.327537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.327547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.327819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.328188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.328198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.328377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.328683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.328692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.328914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.329255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.329265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.329564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.329878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.329888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.330185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.330485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.330494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 
00:31:38.197 [2024-12-07 05:46:41.330793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.331076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.331086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.331385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.331705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.197 [2024-12-07 05:46:41.331714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.197 qpair failed and we were unable to recover it. 00:31:38.197 [2024-12-07 05:46:41.332035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.332366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.332375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.332685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.332977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.332986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.333270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.333519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.333528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.333837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.334166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.334176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.334239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.334418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.334427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 
00:31:38.198 [2024-12-07 05:46:41.334806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.335124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.335135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.335431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.335722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.335731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.335937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.336225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.336235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.336521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.336846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.336856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.337164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.337363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.337372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.337657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.337836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.337847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.338098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.338396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.338405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 
00:31:38.198 [2024-12-07 05:46:41.338735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.339067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.339077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.339381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.339667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.339676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.339940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.340243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.340253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.340449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.340762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.340772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.341083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.341389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.341398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.341695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.341890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.341899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.342249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.342420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.342430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 
00:31:38.198 [2024-12-07 05:46:41.342808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.343099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.343109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.343437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.343641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.343650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.343978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.344139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.344151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.344449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.344755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.344766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.345056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.345361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.345373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.345701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.346019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.346029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.346338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.346668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.346677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 
00:31:38.198 [2024-12-07 05:46:41.346974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.347120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.347130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.347411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.347738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.347748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.348046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.348358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.198 [2024-12-07 05:46:41.348367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.198 qpair failed and we were unable to recover it. 00:31:38.198 [2024-12-07 05:46:41.348676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.348978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.348988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.349290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.349486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.349497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.349851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.350038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.350048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.350341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.350665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.350674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 
00:31:38.199 [2024-12-07 05:46:41.350990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.351311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.351321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.351611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.351889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.351898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.352212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.352504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.352513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.352736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.352936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.352945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.353153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.353532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.353541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.353828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.353966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.353976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.354253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.354466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.354476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 
00:31:38.199 [2024-12-07 05:46:41.354843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.355151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.355161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.355375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.355681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.355690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.355992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.356174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.356183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.356357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.356616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.356626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.356935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.357255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.357265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.357571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.357900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.357910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.358087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.358461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.358470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 
00:31:38.199 [2024-12-07 05:46:41.358843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.359108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.359117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.359446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.359736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.199 [2024-12-07 05:46:41.359745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.199 qpair failed and we were unable to recover it. 00:31:38.199 [2024-12-07 05:46:41.360030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.360335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.360344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.360634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.360831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.360840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.361165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.361468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.361477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.361805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.362103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.362113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.362484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.362788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.362797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 
00:31:38.200 [2024-12-07 05:46:41.363125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.363449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.363459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.363766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.364045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.364056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.364250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.364569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.364578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.364883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.365231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.365240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.365541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.365793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.365802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.366121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.366430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.366439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.366840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.367104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.367114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 
00:31:38.200 [2024-12-07 05:46:41.367430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.367749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.367758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.368052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.368370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.368379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.368663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.368984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.368993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.369288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.369567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.369578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.369950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.370260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.370270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.370556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.370842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.370851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 00:31:38.200 [2024-12-07 05:46:41.371070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.371342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.200 [2024-12-07 05:46:41.371351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.200 qpair failed and we were unable to recover it. 
00:31:38.200 [2024-12-07 05:46:41.371635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.200 [2024-12-07 05:46:41.371910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.200 [2024-12-07 05:46:41.371919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:38.200 qpair failed and we were unable to recover it.
[... the same pattern -- posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." -- repeats nearly verbatim for every reconnect attempt time-stamped between 05:46:41.371 and 05:46:41.456 (console timestamps 00:31:38.200 through 00:31:38.470) ...]
00:31:38.470 [2024-12-07 05:46:41.456942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2034626 Killed "${NVMF_APP[@]}" "$@"
00:31:38.470 05:46:41 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:31:38.470 05:46:41 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:38.470 05:46:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:31:38.470 05:46:41 -- common/autotest_common.sh@722 -- # xtrace_disable
00:31:38.470 05:46:41 -- common/autotest_common.sh@10 -- # set +x
00:31:38.470 05:46:41 -- nvmf/common.sh@469 -- # nvmfpid=2035653
00:31:38.470 05:46:41 -- nvmf/common.sh@470 -- # waitforlisten 2035653
00:31:38.471 05:46:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:38.471 05:46:41 -- common/autotest_common.sh@829 -- # '[' -z 2035653 ']'
00:31:38.471 05:46:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:38.471 05:46:41 -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:38.471 05:46:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:38.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:38.471 05:46:41 -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:38.471 05:46:41 -- common/autotest_common.sh@10 -- # set +x
00:31:38.471 [2024-12-07 05:46:41.599361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:38.471 [2024-12-07 05:46:41.599422] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:38.471 [2024-12-07 05:46:41.627151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.471 [2024-12-07 05:46:41.627200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:38.471 qpair failed and we were unable to recover it.
00:31:38.471 [2024-12-07 05:46:41.627427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.471 [2024-12-07 05:46:41.627745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.471 [2024-12-07 05:46:41.627756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:38.471 qpair failed and we were unable to recover it.
00:31:38.471 [2024-12-07 05:46:41.627994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.471 [2024-12-07 05:46:41.628395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.471 [2024-12-07 05:46:41.628454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:38.471 qpair failed and we were unable to recover it.
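At this point target_disconnect.sh has killed the previous target (PID 2034626), and disconnect_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0; waitforlisten then blocks until the new process (PID 2035653) is serving RPCs on /var/tmp/spdk.sock, which is why the host's reconnect attempts keep failing in the meantime. A rough sketch of that wait, assuming only that the RPC endpoint is the UNIX-domain socket named in the trace (the real helper in autotest_common.sh polls via the SPDK rpc client rather than a raw connect()):

```c
/* wait_for_rpc.c - poll a UNIX-domain socket path until something accepts
 * connections on it, roughly the condition "waitforlisten" waits for.
 * Build: cc -o wait_for_rpc wait_for_rpc.c
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";    /* RPC socket from the trace */
    for (int retry = 0; retry < 100; retry++) { /* max_retries=100, as traced */
        if (rpc_listening(path)) {
            printf("target is listening on %s\n", path);
            return 0;
        }
        usleep(100 * 1000);                     /* 100 ms between attempts */
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}
```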
00:31:38.471 [2024-12-07 05:46:41.628832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.629317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.629376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.629760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.629976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.629988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.630465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.630857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.630873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.631193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.631540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.631551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.631878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.632236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.632248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.632575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.632895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.632906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.633244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.633601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.633613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 
00:31:38.471 [2024-12-07 05:46:41.633963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.634309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.634321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.634666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.635001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.635019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.635459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.635749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.635761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.635983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.471 [2024-12-07 05:46:41.636248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.636264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.636586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.636926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.636937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.637146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.637490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.637502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.637827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.638143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.638155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 
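Mixed into the error stream above is "EAL: No free 2048 kB hugepages reported on node 1": while the relaunched target initializes, DPDK's EAL finds no free 2 MB hugepages on NUMA node 1 (startup can still succeed if node 0 has enough). A small, hypothetical helper, not part of the test suite, that prints the kernel's system-wide hugepage counters behind that notice; per-node counts live under /sys/devices/system/node/node*/hugepages/:

```c
/* hugepages.c - print the 2 MB hugepage counters from /proc/meminfo,
 * the numbers behind DPDK's "No free 2048 kB hugepages" notice.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        /* Lines of interest: HugePages_Total, HugePages_Free, Hugepagesize, ... */
        if (strncmp(line, "HugePages_", 10) == 0 ||
            strncmp(line, "Hugepagesize", 12) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```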
00:31:38.471 [2024-12-07 05:46:41.638505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.638840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.638852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.639179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.639491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.639502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.639914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.640235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.640246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.640647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.640961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.640972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.641326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.641659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.641670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.642003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.642376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.642386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.642716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.643020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.643031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 
00:31:38.471 [2024-12-07 05:46:41.643426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.643752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.643763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.644155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.644479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.471 [2024-12-07 05:46:41.644490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.471 qpair failed and we were unable to recover it. 00:31:38.471 [2024-12-07 05:46:41.644865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.645190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.645201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.645526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.645858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.645869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.646187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.646515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.646525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.646887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.647220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.647231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.647580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.647905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.647916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 
00:31:38.472 [2024-12-07 05:46:41.648269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.648582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.648593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.648922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.649282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.649293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.649608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.649928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.649939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.650247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.650569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.650580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.650924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.651312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.651323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.651647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.651838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.651851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.652179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.652485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.652497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 
00:31:38.472 [2024-12-07 05:46:41.652824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.653118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.653129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.653454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.653782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.653793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.654023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.654198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.654210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.654531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.654859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.654872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.655191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.655417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.655428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.655682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.656006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.656026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.656235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.656563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.656575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 
00:31:38.472 [2024-12-07 05:46:41.656937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.657235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.657247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.657562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.657895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.657907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.658157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.658492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.658504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.658826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.659153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.659165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.659515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.659898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.659911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.472 qpair failed and we were unable to recover it. 00:31:38.472 [2024-12-07 05:46:41.660224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.472 [2024-12-07 05:46:41.660463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.660475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.660710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.660881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.660896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 
00:31:38.473 [2024-12-07 05:46:41.661202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.661420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.661432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.661613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.661961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.661972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.662162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.662354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.662366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.662671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.662757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.662768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.663096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.663424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.663436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.663669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.663986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.663998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.664326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.664657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.664669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 
00:31:38.473 [2024-12-07 05:46:41.665006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.665346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.665357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.665723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.666057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.666069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.666309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.666641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.666655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.666951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.667284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.667296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.667647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.667980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.667993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.668209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.668498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.668511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.668862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.669196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.669207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 
00:31:38.473 [2024-12-07 05:46:41.669552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.669631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.669643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.669837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.670075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.670089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.670400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.670607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.670619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.670827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.671184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.671197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.671415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.671729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.671742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.672101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.672375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.672387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.672613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.672970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.672981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 
00:31:38.473 [2024-12-07 05:46:41.673312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.673635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.673647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.673978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.674309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.674322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.674652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.674969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.674981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.675206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.675425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.675437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.473 qpair failed and we were unable to recover it. 00:31:38.473 [2024-12-07 05:46:41.675660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.473 [2024-12-07 05:46:41.675936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.675948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.676268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.676466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.676479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.676807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.677128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.677140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 
00:31:38.474 [2024-12-07 05:46:41.677477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.677654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.677666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.677884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.678174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.678187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.678523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.678848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.678859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.679210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.679425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.679436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.679506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.679790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.679803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.680113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.680442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.680454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.680857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.681065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.681077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 
00:31:38.474 [2024-12-07 05:46:41.681418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.681746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.681760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.682103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.682468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.682481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.682782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.682973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.682985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.683330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.683686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.683698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.684068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.684385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.684397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.684724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.685050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.685061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.685388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.685716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.685729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 
00:31:38.474 [2024-12-07 05:46:41.686053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.686330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.686342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.686663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.686993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.687005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.687206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.687564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.687576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.687918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.688236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.688248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.688587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.688913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.688925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.689273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.689591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.689603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.689926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.690120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.690133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 
00:31:38.474 [2024-12-07 05:46:41.690479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.690675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.690690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.691029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.691406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.691424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.691755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.692104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.692116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.692433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.692785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.692796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.693147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.693353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.474 [2024-12-07 05:46:41.693365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.474 qpair failed and we were unable to recover it. 00:31:38.474 [2024-12-07 05:46:41.693715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.475 [2024-12-07 05:46:41.693781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.475 [2024-12-07 05:46:41.694069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.475 [2024-12-07 05:46:41.694081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.475 qpair failed and we were unable to recover it. 00:31:38.475 [2024-12-07 05:46:41.694379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.475 [2024-12-07 05:46:41.694703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.475 [2024-12-07 05:46:41.694715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.475 qpair failed and we were unable to recover it. 
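The spdk_app_start notice above, "Total cores available: 4", matches the -m 0xF0 mask passed when the target was relaunched: 0xF0 has four bits set, so the app runs on logical cores 4-7. A tiny illustration of how such a hex core mask maps to core indices (SPDK's own parsing lives in its env layer; this is only a sketch):

```c
/* coremask.c - decode a hex core mask like SPDK's "-m 0xF0" into core indices. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16); /* mask from the log */
    int total = 0;

    printf("cores:");
    for (int core = 0; core < 8 * (int)sizeof(mask); core++) {
        if (mask & (1UL << core)) {   /* bit N set -> logical core N is used */
            printf(" %d", core);
            total++;
        }
    }
    printf("\nTotal cores available: %d\n", total);  /* prints 4 for 0xF0 */
    return 0;
}
```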
00:31:38.475 [2024-12-07 05:46:41.695061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.475 [2024-12-07 05:46:41.695392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:38.475 [2024-12-07 05:46:41.695403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:38.475 qpair failed and we were unable to recover it.
00:31:38.475 .. 00:31:38.749 [2024-12-07 05:46:41.695749 .. 05:46:41.778045] the same three-message cycle repeats for every further reconnect attempt in this interval: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x718380 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."
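errno = 111 is ECONNREFUSED: nothing on the target side is accepting TCP connections on 10.0.0.2:4420 while the initiator keeps retrying, so every nvme_tcp_qpair_connect_sock attempt above fails immediately. A minimal sketch for checking the listener from the test host, independent of SPDK -- the address and port come from the log above, while bash's /dev/tcp redirection and the rpc.py calls in the comments are assumptions about the environment, not something this job shows:

  # Probe the NVMe/TCP listener the initiator keeps dialing.
  if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 accepted a TCP connection"
  else
      echo "10.0.0.2:4420 refused or unreachable -- the errno 111 (ECONNREFUSED) case seen above"
  fi
  # On the target side a listener would normally have been created via the SPDK RPCs, e.g.:
  #   scripts/rpc.py nvmf_create_transport -t tcp
  #   scripts/rpc.py nvmf_subsystem_add_listener <nqn> -t tcp -a 10.0.0.2 -s 4420
  # (shown as an assumption; the exact RPC sequence used by this test is not in this part of the log)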
00:31:38.750 [2024-12-07 05:46:41.778292 .. 05:46:41.781102] the connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x718380 (10.0.0.2:4420) continues; interleaved with it the target application logs:
00:31:38.750 [2024-12-07 05:46:41.779292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:38.750 [2024-12-07 05:46:41.779449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:38.750 [2024-12-07 05:46:41.779461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:38.750 [2024-12-07 05:46:41.779473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:38.750 [2024-12-07 05:46:41.779633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:38.750 [2024-12-07 05:46:41.779788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:38.750 [2024-12-07 05:46:41.779848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
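The two app_setup_trace notices above describe the supported ways to pull the nvmf tracepoints enabled by the 0xFFFF group mask. A sketch of both, assuming the shm file name nvmf_trace.0 printed in this log; the build/bin path and the offline -f invocation are assumptions, while -s nvmf -i 0 comes from the notice itself:

  # Live snapshot, exactly as the notice suggests (shm name "nvmf", instance id 0):
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # Or preserve the raw trace file for offline analysis/debug, per the second notice:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt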
00:31:38.750 [2024-12-07 05:46:41.781313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.750 [2024-12-07 05:46:41.781655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.750 [2024-12-07 05:46:41.781665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.750 qpair failed and we were unable to recover it. 00:31:38.750 [2024-12-07 05:46:41.781998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.750 [2024-12-07 05:46:41.782346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.750 [2024-12-07 05:46:41.782356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.750 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.782512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.782814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.782824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.779849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:38.751 [2024-12-07 05:46:41.783147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.783469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.783480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.783795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.783966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.783977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.784269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.784565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.784575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.784752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.784917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.784927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 
00:31:38.751 [2024-12-07 05:46:41.785214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.785541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.785552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.785869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.786187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.786198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.786560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.786853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.786862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.787216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.787538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.787548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.787896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.788109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.788119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.788449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.788771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.788781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.789128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.789472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.789481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 
00:31:38.751 [2024-12-07 05:46:41.789795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.790142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.790152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.790249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.790517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.790527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.790724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.791347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.791651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.791939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.792261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.792585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.792595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 00:31:38.751 [2024-12-07 05:46:41.792900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.793061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.751 [2024-12-07 05:46:41.793072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.751 qpair failed and we were unable to recover it. 
00:31:38.751 [2024-12-07 05:46:41.793350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.793666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.793676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.793993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.794179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.794190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.794527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.794700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.794710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.795022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.795355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.795366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.795683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.796014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.796024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.796332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.796621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.796631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.796875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 
00:31:38.752 [2024-12-07 05:46:41.797400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.797734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.797912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.798238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.798552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.798563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.798768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.799103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.799114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.799423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.799716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.799726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.800092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.800398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.800408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.800728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.801086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.801097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 
00:31:38.752 [2024-12-07 05:46:41.801402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.801586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.801597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.801930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.802114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.802125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.802300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.802574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.802583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.802902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.803216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.803227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.752 qpair failed and we were unable to recover it. 00:31:38.752 [2024-12-07 05:46:41.803405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.752 [2024-12-07 05:46:41.803728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.803738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.804057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.804245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.804255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.804449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.804782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.804793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 
00:31:38.753 [2024-12-07 05:46:41.805136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.805520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.805531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.805872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.805919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.805928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.806255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.806454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.806465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.806787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.807112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.807123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.807441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.807763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.807774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.808110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.808426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.808436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.808747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.809067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.809078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 
00:31:38.753 [2024-12-07 05:46:41.809398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.809745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.809755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.810019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.810226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.810236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.810471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.810750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.810761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.811085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.811435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.811445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.811753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.812085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.812095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.812258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.812572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.812583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.812884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.813114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.813124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 
00:31:38.753 [2024-12-07 05:46:41.813462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.813767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.813777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.813952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.814238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.753 [2024-12-07 05:46:41.814249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.753 qpair failed and we were unable to recover it. 00:31:38.753 [2024-12-07 05:46:41.814430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.814626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.814638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.814973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.815289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.815300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.815616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.815944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.815954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.816345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.816687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.816697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.817017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.817229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.817240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 
00:31:38.754 [2024-12-07 05:46:41.817454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.817780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.817790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.818099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.818418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.818428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.818715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.819034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.819045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.819381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.819687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.819697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.819991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.820303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.820313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.820612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.820934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.820944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.821348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.821647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.821657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 
00:31:38.754 [2024-12-07 05:46:41.821851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.822185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.822196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.822508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.822671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.822681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.822915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.823229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.823239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.823425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.823754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.823765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.823944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.824286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.824297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.824633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.824809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.824820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 00:31:38.754 [2024-12-07 05:46:41.825147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.825471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.825481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.754 qpair failed and we were unable to recover it. 
00:31:38.754 [2024-12-07 05:46:41.825668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.825993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.754 [2024-12-07 05:46:41.826003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.826355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.826669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.826678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.826854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.827172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.827182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.827499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.827810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.827820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.828133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.828455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.828465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.828653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.828828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.828838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.829128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.829300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.829310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 
00:31:38.755 [2024-12-07 05:46:41.829502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.829824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.829833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.830141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.830469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.830479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.830772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.831105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.831115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.831465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.831643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.831653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.831865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.832064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.832075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.832410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.832570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.832580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.832872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 
00:31:38.755 [2024-12-07 05:46:41.833380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.833788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.833959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.834144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.834425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.834435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.834746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.834942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.834952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.835254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.835298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.835308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.755 [2024-12-07 05:46:41.835575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.835921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.755 [2024-12-07 05:46:41.835932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.755 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.836111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.836392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.836402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 
00:31:38.756 [2024-12-07 05:46:41.836722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.837018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.837028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.837345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.837630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.837640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.837830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.838005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.838033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.838358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.838528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.838538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.838850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.839038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.839049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.839227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.839568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.839578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.839901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.840238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.840248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 
00:31:38.756 [2024-12-07 05:46:41.840563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.840843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.840853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.841065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.841381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.841391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.841726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.842082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.842092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.842406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.842579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.842589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.842977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.843337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.843350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.843666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.843965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.843974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.844279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.844484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.844493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 
00:31:38.756 [2024-12-07 05:46:41.844804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.845116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.845126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.845462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.845653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.845662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.845848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.846246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.846644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.846949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.847263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.847626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.847636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.847942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.848232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.848242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 
00:31:38.756 [2024-12-07 05:46:41.848569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.848740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.848750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.756 [2024-12-07 05:46:41.848929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.849216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.756 [2024-12-07 05:46:41.849226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.756 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.849477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.849810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.849820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.850149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.850486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.850495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.850671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.851021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.851030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.851316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.851637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.851647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.851956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.852261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.852271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 
00:31:38.757 [2024-12-07 05:46:41.852561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.852867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.852876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.853182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.853323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.853332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.853498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.853645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.853654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.853970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.854265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.854275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.854599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.854768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.854777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.854947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.855294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.855304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.855498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.855782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.855792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 
00:31:38.757 [2024-12-07 05:46:41.855981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.856311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.856322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.856639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.856972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.856982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.857300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.857638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.857647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.857939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.858092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.858102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.858470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.858815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.858825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.859180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.859498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.859508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.859821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.860104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.860115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 
00:31:38.757 [2024-12-07 05:46:41.860435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.860756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.860766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.860807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.861128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.861138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.757 qpair failed and we were unable to recover it. 00:31:38.757 [2024-12-07 05:46:41.861332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.861516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.757 [2024-12-07 05:46:41.861527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.861593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.861887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.861897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.862179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.862519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.862528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.862842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.862892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.862901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.863165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.863336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.863346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 
00:31:38.758 [2024-12-07 05:46:41.863526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.863873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.863883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.864069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.864364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.864374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.864690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.865294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.865785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.865840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.866025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.866334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.866345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.866514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.866833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.866843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 
00:31:38.758 [2024-12-07 05:46:41.867160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.867484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.867494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.867789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.868199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.868209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.868372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.868572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.868582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.868899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.869232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.869242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.869553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.869884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.869893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.870081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.870389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.870398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.870670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.870970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.870985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 
00:31:38.758 [2024-12-07 05:46:41.871204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.871508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.871518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.871834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.872008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.872023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.872317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.872640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.872650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.872965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.873154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.873165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.873484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.873806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.873816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.874127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.874462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.874472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.874805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.874982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.874992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 
00:31:38.758 [2024-12-07 05:46:41.875292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.875636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.875646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.876021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.876339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.876349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.876668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.876973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.758 [2024-12-07 05:46:41.876983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.758 qpair failed and we were unable to recover it. 00:31:38.758 [2024-12-07 05:46:41.877038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.877229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.877239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.877554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.877870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.877881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.878184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.878512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.878522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.878836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.878997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.879006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 
00:31:38.759 [2024-12-07 05:46:41.879310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.879596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.879606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.879919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.880213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.880224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.880535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.880574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.880583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.880855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.881038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.881048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.881351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.881509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.881519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.881851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.882116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.882126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.882222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.882503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.882513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 
00:31:38.759 [2024-12-07 05:46:41.882673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.882991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.883001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.883337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.883658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.883668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.883980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.884303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.884313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.884625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.884944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.884954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.885335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.885636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.885646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.885988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.886321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.886331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.886651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.886952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.886961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 
00:31:38.759 [2024-12-07 05:46:41.887123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.887386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.887396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.887707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.887879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.887889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.888069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.888385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.888395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.888710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.889021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.889032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.889350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.889509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.889518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.889826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.890170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.890180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.890523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.890809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.890819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 
00:31:38.759 [2024-12-07 05:46:41.891027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.891349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.891358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.891520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.891630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.891641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.891902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.892223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.892233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.759 qpair failed and we were unable to recover it. 00:31:38.759 [2024-12-07 05:46:41.892517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.892855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.759 [2024-12-07 05:46:41.892864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.893177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.893512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.893522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.893830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.894173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.894185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.894380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.894542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.894552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 
00:31:38.760 [2024-12-07 05:46:41.894853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.895041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.895051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.895360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.895697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.895707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.895904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.896176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.896186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.896349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.896507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.896516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.896822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.897135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.897145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.897478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.897825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.897835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.898025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.898312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.898322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 
00:31:38.760 [2024-12-07 05:46:41.898634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.898908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.898918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.899136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.899324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.899334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.899523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.899853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.899863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.900172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.900517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.900527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.900677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.900957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.900967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.901279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.901565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.901575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.901892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.902226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.902236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 
00:31:38.760 [2024-12-07 05:46:41.902550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.902866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.902876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.903176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.903515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.903526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.903837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.904179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.904189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.904500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.904845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.904855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.905175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.905361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.905372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.905702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.905864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.905874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.906076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.906432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.906442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 
00:31:38.760 [2024-12-07 05:46:41.906796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.906968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.906978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.907313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.907634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.907644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.907837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.908142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.908152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.908327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.908529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.760 [2024-12-07 05:46:41.908539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.760 qpair failed and we were unable to recover it. 00:31:38.760 [2024-12-07 05:46:41.908739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.909009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.909023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.909346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.909693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.909703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.910014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.910229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.910239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 
00:31:38.761 [2024-12-07 05:46:41.910533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.910858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.910868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.911078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.911384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.911394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.911574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.911870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.911880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.912184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.912541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.912550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.912871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.913137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.913147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.913461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.913786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.913796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.914122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.914436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.914446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 
00:31:38.761 [2024-12-07 05:46:41.914668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.914856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.914866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.915218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.915565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.915575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.915901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.916221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.916232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.916548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.916887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.916897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.917203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.917380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.917390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.917695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.917862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.917872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.918053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.918270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.918280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 
00:31:38.761 [2024-12-07 05:46:41.918601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.918912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.918922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.919192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.919527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.919537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.919732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.919853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.919863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.920034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.920329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.920339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.920686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.920937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.920947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.921121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.921323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.921334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.921535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.921864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.921874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 
00:31:38.761 [2024-12-07 05:46:41.922264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.922571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.922583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.922792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.922839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.922848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.923176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.923487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.923498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.923827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.924069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.924079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.761 qpair failed and we were unable to recover it. 00:31:38.761 [2024-12-07 05:46:41.924128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.924444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.761 [2024-12-07 05:46:41.924454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.924774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.924964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.924974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.925179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.925476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.925486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 
00:31:38.762 [2024-12-07 05:46:41.925661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.925934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.925945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.926163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.926321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.926331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.926640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.926976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.926987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.927320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.927639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.927650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.927835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.928128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.928139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.928455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.928614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.928624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.928951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.929124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.929135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 
00:31:38.762 [2024-12-07 05:46:41.929441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.929617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.929628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.929823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.930051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.930061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.930356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.930558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.930569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.930919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.931009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.931026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.931226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.931416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.931427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.931775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.932104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.932114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.932421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.932725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.932735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 
00:31:38.762 [2024-12-07 05:46:41.933062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.933264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.933274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.933655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.933913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.933923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.934217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.934552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.934561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.934610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.934796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.934805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.935139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.935464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.935473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.935669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.935708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.935718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.935886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.936159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.936170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 
00:31:38.762 [2024-12-07 05:46:41.936507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.936800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.936810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.762 qpair failed and we were unable to recover it. 00:31:38.762 [2024-12-07 05:46:41.937145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.762 [2024-12-07 05:46:41.937463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.937474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.937665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.937991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.938001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.938340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.938688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.938698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.938744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.939057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.939068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.939361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.939531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.939541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.939855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.940028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.940039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 
00:31:38.763 [2024-12-07 05:46:41.940362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.940552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.940562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.940876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.941177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.941187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.941362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.941675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.941685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.942019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.942282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.942627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.942909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.943202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.943540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.943550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 
00:31:38.763 [2024-12-07 05:46:41.943889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.944102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.944113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.944306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.944471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.944481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.944796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.945194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.945205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.945428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.945617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.945628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.945797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.946114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.946124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.946447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.946770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.946780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.947102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.947444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.947453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 
00:31:38.763 [2024-12-07 05:46:41.947830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.948173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.948184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.948514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.948565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.948575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.948893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.949230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.949243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.949560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.949733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.949743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.949870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.950284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.950793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.950984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 
00:31:38.763 [2024-12-07 05:46:41.951320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.951641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.951651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.951852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.952170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.763 [2024-12-07 05:46:41.952181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.763 qpair failed and we were unable to recover it. 00:31:38.763 [2024-12-07 05:46:41.952491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.952819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.952829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.953015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.953174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.953184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.953502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.953825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.953834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.953875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.954211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.954221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.954561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.954854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.954864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 
00:31:38.764 [2024-12-07 05:46:41.955031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.955350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.955361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.955568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.955731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.955741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.956063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.956251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.956262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.956581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.956901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.956911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.957224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.957546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.957556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.957877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.958298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 
00:31:38.764 [2024-12-07 05:46:41.958687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.958984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.959288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.959599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.959609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.959940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.960257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.960268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.960458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.960673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.960683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.961024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.961273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.961285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.961623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.961965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.961976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.962158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.962200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.962210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 
00:31:38.764 [2024-12-07 05:46:41.962516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.962567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.962577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.962888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.963179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.963190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.963484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.963751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.963761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.964081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.964429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.964439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.964632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.964817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.964827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.965160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.965327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.965338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.965487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.965679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.965689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 
00:31:38.764 [2024-12-07 05:46:41.965876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.966339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.764 qpair failed and we were unable to recover it. 00:31:38.764 [2024-12-07 05:46:41.966514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.764 [2024-12-07 05:46:41.966846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.967066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.967364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.967375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.967559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.967872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.967882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.968188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.968382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.968394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.968563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.968963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.968973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 
00:31:38.765 [2024-12-07 05:46:41.969197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.969382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.969392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.969682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.970016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.970029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.970203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.970511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.970521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.970687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.971014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.971025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.971185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.971493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.971503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.971819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.972005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.972020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.972333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.972629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.972638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 
00:31:38.765 [2024-12-07 05:46:41.972975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.973293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.973303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:38.765 [2024-12-07 05:46:41.973606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.973982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.765 [2024-12-07 05:46:41.973992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:38.765 qpair failed and we were unable to recover it. 00:31:39.035 [2024-12-07 05:46:41.974325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.035 [2024-12-07 05:46:41.974652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.974663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.974841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.975151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.975162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.975337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.975704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.975714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.976055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.976228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.976239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.976565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.976913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.976923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 
00:31:39.036 [2024-12-07 05:46:41.977228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.977403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.977415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.977754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.978058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.978068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.978388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.978634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.978644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.978993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.979215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.979226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.979536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.979732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.979742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.980067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.980392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.980403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.980710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.981037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.981047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 
00:31:39.036 [2024-12-07 05:46:41.981339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.981669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.981678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.981994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.982302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.982312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.982626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.982813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.982823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.983148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.983356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.983366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.983535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.983846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.983856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.984177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.984387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.984397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 00:31:39.036 [2024-12-07 05:46:41.984701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.985019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.036 [2024-12-07 05:46:41.985030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.036 qpair failed and we were unable to recover it. 
00:31:39.036 [2024-12-07 05:46:41.985391 .. 05:46:42.065281] (the same sequence repeats continuously: two posix.c:1032:posix_sock_create connect() failed, errno = 111 records, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x718380 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.")
00:31:39.042 [2024-12-07 05:46:42.065599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.065918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.065927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.066226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.066486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.066496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.066535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.066817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.066828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.067136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.067470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.067480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.067782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.067947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.067956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.068282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.068604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.068614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.068926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.069211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.069221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 
00:31:39.042 [2024-12-07 05:46:42.069526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.069840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.069850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.070158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.070520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.070530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.070828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.071145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.071155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.071475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.071791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.071800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.072107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.072288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.072298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.072623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.072940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.072951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.073256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.073576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.073585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 
00:31:39.042 [2024-12-07 05:46:42.073747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.074060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.074069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.074402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.074689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.074699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.074867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.075175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.075185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.075312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.075513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.075522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.075834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.076132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.076143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.076446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.076765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.076774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 00:31:39.042 [2024-12-07 05:46:42.077084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.077276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.042 [2024-12-07 05:46:42.077289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.042 qpair failed and we were unable to recover it. 
00:31:39.042 [2024-12-07 05:46:42.077629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.077952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.077962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.078324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.078643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.078653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.079037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.079302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.079312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.079621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.079914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.079924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.080225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.080386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.080395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.080716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.080760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.080769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.081080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.081291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.081301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 
00:31:39.043 [2024-12-07 05:46:42.081612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.081934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.081944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.082253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.082416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.082427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.082600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.082870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.082880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.083054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.083211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.083220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.083534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.083830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.083839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.084140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.084414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.084424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.084773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.085036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.085046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 
00:31:39.043 [2024-12-07 05:46:42.085374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.085669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.085679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.085990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.086306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.086317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.086602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.086922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.086932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.087254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.087546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.087555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.087767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.087955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.087965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.088244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.088426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.088436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.088719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 
00:31:39.043 [2024-12-07 05:46:42.089240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.089718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.089924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.090238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.090438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.090448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.090633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.090953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.090963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.091274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.091591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.091601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.091941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.092216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.092226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 00:31:39.043 [2024-12-07 05:46:42.092504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.092841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.043 [2024-12-07 05:46:42.092851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.043 qpair failed and we were unable to recover it. 
00:31:39.043 [2024-12-07 05:46:42.093186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.093497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.093507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.093932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.094265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.094277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.094605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.094650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.094660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.094923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.095226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.095236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.095568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.095753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.095763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.095947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.096159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.096169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.096520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.096821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.096831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 
00:31:39.044 [2024-12-07 05:46:42.097145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.097321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.097330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.097486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.097768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.097778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.098086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.098400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.098410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.098580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.098761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.098771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.099082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.099162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.099172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.099480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.099635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.099645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.099802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.100072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.100082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 
00:31:39.044 [2024-12-07 05:46:42.100373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.100657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.100666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.100989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.101181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.101191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.101367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.101674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.101684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.101985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.102324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.102334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.102495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.102831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.102840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.103013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.103317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.103326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.103658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.103856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.103866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 
00:31:39.044 [2024-12-07 05:46:42.104092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.104453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.104463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.104807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.105129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.105142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.105532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.105782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.105792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.106119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.106290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.106300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.106640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.106965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.106975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.107276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.107317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.107326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 00:31:39.044 [2024-12-07 05:46:42.107642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.107969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.107979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.044 qpair failed and we were unable to recover it. 
00:31:39.044 [2024-12-07 05:46:42.108307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.108626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.044 [2024-12-07 05:46:42.108636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.108915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.109241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.109252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.109407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.109690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.109700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.109884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.110063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.110074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.110379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.110566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.110576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.110910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.111273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 
00:31:39.045 [2024-12-07 05:46:42.111779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.111947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.112167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.112389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.112400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.112710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.113016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.113026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.113322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.113478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.113488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.113730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.114023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.114033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.114223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.114550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.114561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.114742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.115003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.115016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 
00:31:39.045 [2024-12-07 05:46:42.115325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.115517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.115526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.115897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.116198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.116208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.116536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.116854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.116864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.117174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.117483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.117493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.117799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.118078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.118097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.118262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.118567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.118577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.118936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.119088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.119098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 
00:31:39.045 [2024-12-07 05:46:42.119424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.119621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.119631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.119965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.120281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.120291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.120461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.120752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.120761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.121093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.121240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.121250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.045 qpair failed and we were unable to recover it. 00:31:39.045 [2024-12-07 05:46:42.121442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.045 [2024-12-07 05:46:42.121606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.121616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.121769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.121941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.121951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.122275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.122585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.122594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 
00:31:39.046 [2024-12-07 05:46:42.122876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.123144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.123154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.123319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.123498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.123508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.123823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.124006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.124020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.124350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.124650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.124660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.124972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.125301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.125312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.125574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.125757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.125766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.126075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.126367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.126377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 
00:31:39.046 [2024-12-07 05:46:42.126596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.126885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.126896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.127185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.127528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.127537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.127846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.128167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.128178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.128472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.128692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.128702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.128867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.129162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.129172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.129504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.129841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.129851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.130030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.130363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.130373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 
00:31:39.046 [2024-12-07 05:46:42.130685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.131024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.131034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.131083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.131396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.131406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.131738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.132061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.132071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.132381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.132570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.132580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.132942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.133200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.133210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.133520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.133842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.133853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.134045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.134327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.134336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 
00:31:39.046 [2024-12-07 05:46:42.134645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.134985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.134994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.135214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.135513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.135523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.135774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.136077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.136087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.136426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.136746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.136756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.136940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.137283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.046 [2024-12-07 05:46:42.137293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.046 qpair failed and we were unable to recover it. 00:31:39.046 [2024-12-07 05:46:42.137620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.137935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.137944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.138257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.138571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.138581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 
00:31:39.047 [2024-12-07 05:46:42.138919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.139166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.139176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.139368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.139665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.139675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.140066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.140296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.140306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.140612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.140928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.140938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.141240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.141578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.141588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.141886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.142162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.142172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.142496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.142815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.142825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 
00:31:39.047 [2024-12-07 05:46:42.143148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.143454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.143464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.143747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.144101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.144112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.144421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.144466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.144475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.144785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.145123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.145134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.145304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.145490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.145500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.145853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.146100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.146110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.146196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.146486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.146495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 
00:31:39.047 [2024-12-07 05:46:42.146808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.147128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.147139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.147469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.147773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.147783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.148065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.148380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.148391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.148639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.148773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.148783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.149115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.149195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.149206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.149486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.149809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.149818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.150155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.150323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.150332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 
00:31:39.047 [2024-12-07 05:46:42.150642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.150981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.150991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.151306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.151493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.151503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.151686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.151998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.152007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.152337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.152527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.152537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.152838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.153009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.153024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.047 qpair failed and we were unable to recover it. 00:31:39.047 [2024-12-07 05:46:42.153333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.047 [2024-12-07 05:46:42.153674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.153684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.153996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.154326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.154336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 
00:31:39.048 [2024-12-07 05:46:42.154663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.154957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.154967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.155308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.155489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.155499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.155679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.156287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.156755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.156933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.157091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.157393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.157403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.157596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.157899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.157909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 
00:31:39.048 [2024-12-07 05:46:42.158244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.158394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.158404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.158492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.158773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.158783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.158974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.159251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.159262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.159556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.159891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.159902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.160086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.160310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.160319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.160393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.160670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.160679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.160859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.161038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.161048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 
00:31:39.048 [2024-12-07 05:46:42.161337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.161660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.161670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.161839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.162143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.162154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.162467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.162734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.162743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.163053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.163243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.163253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.163577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.163914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.163925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.164225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.164527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.164537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.164832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.164904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.164913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 
00:31:39.048 [2024-12-07 05:46:42.165238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.165552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.165562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.165881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.166179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.166190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.166421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.166602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.166612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.166920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.167241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.167251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.048 qpair failed and we were unable to recover it. 00:31:39.048 [2024-12-07 05:46:42.167529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.048 [2024-12-07 05:46:42.167912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.167922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.167971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.168305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.168315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.168473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.168658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.168668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 
00:31:39.049 [2024-12-07 05:46:42.168857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.169082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.169093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.169424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.169737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.169747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.169944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.170236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.170247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.170569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.170716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.170726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.170964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.171133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.171144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.171423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.171732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.171742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.172051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.172273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.172282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 
00:31:39.049 [2024-12-07 05:46:42.172473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.172805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.172815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.172988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.173333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.173343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.173658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.173842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.173851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.174036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.174512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.174869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.174927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.175119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.175466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.175475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 
00:31:39.049 [2024-12-07 05:46:42.175789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.175971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.175980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.176290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.176592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.176603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.176920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.177249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.177259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.177572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.177754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.177764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.178097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.178382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.178392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.178699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.178893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.178902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.179089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.179383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.179393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 
00:31:39.049 [2024-12-07 05:46:42.179692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.180029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.180039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.180327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.180636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.180645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.180975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.181132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.181142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.181384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.181587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.181597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.049 qpair failed and we were unable to recover it. 00:31:39.049 [2024-12-07 05:46:42.181951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.049 [2024-12-07 05:46:42.182270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.182283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.182332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.182494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.182503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.182697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 
00:31:39.050 [2024-12-07 05:46:42.183351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.183698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.183987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.184267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.184436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.184445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.184787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.185004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.185017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.185354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.185682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.185692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.186014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.186177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.186187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 00:31:39.050 [2024-12-07 05:46:42.186502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.186851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.050 [2024-12-07 05:46:42.186860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.050 qpair failed and we were unable to recover it. 
00:31:39.050 [2024-12-07 05:46:42.187183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:39.050 [2024-12-07 05:46:42.187557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:39.050 [2024-12-07 05:46:42.187567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:39.050 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously from 2024-12-07 05:46:42.187 through 05:46:42.269 (console timestamps 00:31:39.050 to 00:31:39.326): two posix.c:1032:posix_sock_create connect() failures with errno = 111, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x718380 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; no connection attempt in this window succeeds ...]
00:31:39.326 [2024-12-07 05:46:42.269191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:39.326 [2024-12-07 05:46:42.269349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:39.326 [2024-12-07 05:46:42.269359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420
00:31:39.326 qpair failed and we were unable to recover it.
00:31:39.326 [2024-12-07 05:46:42.269648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.269825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.269835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.270022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.270314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.270325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.270655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.270816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.270826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.271019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.271183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.271192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.271505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.271831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.271841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.272017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.272323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.272333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.272632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.272824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.272833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 
00:31:39.326 [2024-12-07 05:46:42.273019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.273327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.273337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.273663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.273982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.273991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.274162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.274368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.274378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.274704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.275023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.275034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.275362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.275690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.275699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.276036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.276367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.276377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.326 [2024-12-07 05:46:42.276566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.276882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.276891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 
00:31:39.326 [2024-12-07 05:46:42.277143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.277452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.326 [2024-12-07 05:46:42.277462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.326 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.277776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.278081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.278091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.278429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.278742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.278753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.279078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.279410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.279421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.279751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.279931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.279940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.280103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.280410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.280420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.280720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.280912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.280925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 
00:31:39.327 [2024-12-07 05:46:42.281085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.281396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.281406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.281714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.282040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.282050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.282364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.282658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.282668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.282998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.283333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.283344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.283651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.283962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.283971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.284302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.284620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.284630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.284939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.285247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.285257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 
00:31:39.327 [2024-12-07 05:46:42.285447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.285763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.285773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.285946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.286209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.286219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.286533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.286852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.286862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.287104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.287152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.287163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.287469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.287668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.287678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.288009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.288339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.288349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.288660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.288980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.288990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 
00:31:39.327 [2024-12-07 05:46:42.289263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.289581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.289591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.289750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.290044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.290055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.290379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.290682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.290691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.291039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.291352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.291362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.291694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.292030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.292041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.292355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.292517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.292527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.292727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.293048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.293059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 
00:31:39.327 [2024-12-07 05:46:42.293375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.293690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.293700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.327 qpair failed and we were unable to recover it. 00:31:39.327 [2024-12-07 05:46:42.293883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.327 [2024-12-07 05:46:42.294065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.294075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.294362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.294678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.294687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.295006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.295324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.295334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.295612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.295786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.295797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.295847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.296175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.296186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.296375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.296639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.296649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 
00:31:39.328 [2024-12-07 05:46:42.296966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.297161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.297171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.297467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.297808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.297818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.298149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.298467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.298477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.298808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.299128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.299138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.299527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.299817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.299828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.300142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.300466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.300476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.300672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.300987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.300996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 
00:31:39.328 [2024-12-07 05:46:42.301047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.301196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.301205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.301513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.301673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.301683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.301947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.302091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.302101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.302486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.302803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.302813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.303144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.303318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.303327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.303527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.303787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.303797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.303968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.304313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.304324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 
00:31:39.328 [2024-12-07 05:46:42.304633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.304953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.304963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.305277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.305470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.305479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.305808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.306093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.306103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.306402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.306583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.306593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.306927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.307138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.307148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.307332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.307657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.307667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.307709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.308017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.308027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 
00:31:39.328 [2024-12-07 05:46:42.308210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.308528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.308538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.308851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.309170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.328 [2024-12-07 05:46:42.309183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.328 qpair failed and we were unable to recover it. 00:31:39.328 [2024-12-07 05:46:42.309334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.309606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.309616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.309777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.310232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.310677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.310946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.311246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.311411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.311421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 
00:31:39.329 [2024-12-07 05:46:42.311753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.312092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.312102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.312441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.312784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.312794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.313097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.313260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.313270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.313606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.313767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.313777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.314073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.314453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.314463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.314765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.315082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.315092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.315405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.315692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.315702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 
00:31:39.329 [2024-12-07 05:46:42.315901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.316123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.316134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.316480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.316665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.316675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.316947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.317260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.317271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.317452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.317756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.317767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.318080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.318399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.318410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.318581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.318919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.318929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.319110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.319446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.319457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 
00:31:39.329 [2024-12-07 05:46:42.319728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.319896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.319907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.320230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.320405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.320415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.320738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.321045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.321055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.321377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.321717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.321727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.322030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.322169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.322179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.322489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.322830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.322840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.323155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.323197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.323206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 
00:31:39.329 [2024-12-07 05:46:42.323482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.323803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.323814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.324123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.324439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.324450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.329 [2024-12-07 05:46:42.324782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.324823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.329 [2024-12-07 05:46:42.324833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.329 qpair failed and we were unable to recover it. 00:31:39.330 [2024-12-07 05:46:42.325032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.325333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.325344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.330 qpair failed and we were unable to recover it. 00:31:39.330 [2024-12-07 05:46:42.325632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.325792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.325802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.330 qpair failed and we were unable to recover it. 00:31:39.330 [2024-12-07 05:46:42.325852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.326141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.326152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.330 qpair failed and we were unable to recover it. 00:31:39.330 [2024-12-07 05:46:42.326446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.326640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.330 [2024-12-07 05:46:42.326651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.330 qpair failed and we were unable to recover it. 
00:31:39.330 - 00:31:39.334 From [2024-12-07 05:46:42.326967] through [2024-12-07 05:46:42.388505] the log contains only back-to-back repetitions of the same two errors: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, each followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."
00:31:39.334 [2024-12-07 05:46:42.388818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.388981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.388991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.389275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.389611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.389621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.389934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.390246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.390256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.390386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.390700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.390710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.391029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.391352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.391362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.391545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 05:46:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:39.334 [2024-12-07 05:46:42.391896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.391911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.392139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 05:46:42 -- common/autotest_common.sh@862 -- # return 0 00:31:39.334 [2024-12-07 05:46:42.392401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.392412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 
00:31:39.334 05:46:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:39.334 [2024-12-07 05:46:42.392590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 05:46:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:39.334 [2024-12-07 05:46:42.392882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.392893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.334 [2024-12-07 05:46:42.393113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.393157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.393167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.393315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.393630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.393640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.393949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.394251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.394261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.394544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.394860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.394871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.395178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.395365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.395375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.395696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.396014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.396025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 
00:31:39.334 [2024-12-07 05:46:42.396326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.396631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.396641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.396923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.397219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.397229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.397381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.397561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.397571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.397874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.398155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.398166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.398489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.398625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.398635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.398941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.399256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.399267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 00:31:39.334 [2024-12-07 05:46:42.399583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.399879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.399889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.334 qpair failed and we were unable to recover it. 
00:31:39.334 [2024-12-07 05:46:42.400217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.334 [2024-12-07 05:46:42.400380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.400390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.400589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.400868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.400879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.401040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.401425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.401434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.401744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.402369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.402883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.402966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.403279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.403585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.403598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 
00:31:39.335 [2024-12-07 05:46:42.403888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.404056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.404067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.404343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.404515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.404526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.404851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.405377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.405611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.405838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.405989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.406284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.406294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.406592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.406828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.406839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 
00:31:39.335 [2024-12-07 05:46:42.407183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.407505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.407515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.407816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.408288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.408682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.408994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.409178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.409374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.409384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.409656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.409841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.409852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.410029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.410321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.410332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 
00:31:39.335 [2024-12-07 05:46:42.410463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.410730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.410740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.411070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.411261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.411271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.411614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.411953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.411963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.335 [2024-12-07 05:46:42.412267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.412558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.335 [2024-12-07 05:46:42.412568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.335 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.412880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.413034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.413045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.413354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.413672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.413683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.414018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.414399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.414410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 
00:31:39.336 [2024-12-07 05:46:42.414708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.415027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.415038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.415428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.415623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.415633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.415824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.416306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.416649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.416979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.417290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.417603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.417613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.417837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 
00:31:39.336 [2024-12-07 05:46:42.418234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.418777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.418933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.419247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.419443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.419454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.419760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.420035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.420045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.420367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.420709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.420719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.421031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.421378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.421389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.421669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.421962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.421972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 
00:31:39.336 [2024-12-07 05:46:42.422288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.422609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.422619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.422825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.423326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.423552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.423719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.424033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.424233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.424243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.424436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.424649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.424659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.424940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.425260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.425271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 
00:31:39.336 [2024-12-07 05:46:42.425448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.425656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.425666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.425843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.426106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.426116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.426431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.426712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.426723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.426993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.427180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.427190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.336 qpair failed and we were unable to recover it. 00:31:39.336 [2024-12-07 05:46:42.427494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.336 [2024-12-07 05:46:42.427807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.427817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.428152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.428324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.428334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.428519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.428821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.428831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 
00:31:39.337 [2024-12-07 05:46:42.429112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 05:46:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.337 [2024-12-07 05:46:42.429418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.429429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.429720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 05:46:42 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:39.337 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.337 [2024-12-07 05:46:42.430082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.430101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.337 [2024-12-07 05:46:42.431117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.431455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.431466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.431780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.431963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.431973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.432149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.432327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.432337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.432651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.432822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.432833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.433153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.433505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.433515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 
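Between the retries, nvmf/common.sh@472 installs the cleanup trap (process_shm followed by nvmftestfini on SIGINT, SIGTERM and EXIT, so the target is torn down even if the test aborts) and host/target_disconnect.sh@19 issues the first RPC of the test, rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the running nvmf_tgt for a 64 MiB RAM-backed bdev with 512-byte blocks; the bare "Malloc0" echoed a few lines further down is that RPC's reply. A standalone sketch of the same two steps follows, assuming an SPDK checkout and a target already listening on the default RPC socket (rpc_cmd in the suite is essentially a wrapper around scripts/rpc.py); the paths and the cleanup function are placeholders.
#!/usr/bin/env bash
# Sketch only: the trap + first RPC outside the autotest harness.
# Assumes an SPDK checkout in $SPDK_DIR and an nvmf_tgt already running on the
# default RPC socket; cleanup_target is a hypothetical stand-in for the suite's
# process_shm + nvmftestfini teardown.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # placeholder path

cleanup_target() {
    echo "cleaning up target state"
}
trap 'cleanup_target' SIGINT SIGTERM EXIT

# 64 MiB malloc bdev with 512-byte blocks, named Malloc0 (matches the log's RPC).
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0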
00:31:39.337 [2024-12-07 05:46:42.433700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.434023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.434033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.434327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.434511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.434521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.434831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.435017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.435027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.435230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.435414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.435423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.435700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.436039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.436049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.436383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.436707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.436718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.436903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.437072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.437082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 
00:31:39.337 [2024-12-07 05:46:42.437245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.437588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.437597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.437873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.438212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.438222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.438566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.438726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.438735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.439007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.439344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.439354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.439654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.439965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.439975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.440323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.440636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.440645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.440839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.441004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.441021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 
00:31:39.337 [2024-12-07 05:46:42.441430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.441625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.441635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.442000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.442295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.442305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.442622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.442941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.442951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.443005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.443160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.443170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.337 qpair failed and we were unable to recover it. 00:31:39.337 [2024-12-07 05:46:42.443505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.337 [2024-12-07 05:46:42.443822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.443832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.444057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.444396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.444406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.444728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.445058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.445068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 
00:31:39.338 [2024-12-07 05:46:42.445379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.445568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.445578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.445971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.446259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.446269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.446460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.446746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.446756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.447076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.447375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.447385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.447755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.448059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.448070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 Malloc0 00:31:39.338 [2024-12-07 05:46:42.448277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.448596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.448606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.448832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.338 [2024-12-07 05:46:42.449174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.449184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 
00:31:39.338 05:46:42 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:39.338 [2024-12-07 05:46:42.449397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.449670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.449680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.449885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.338 [2024-12-07 05:46:42.450217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.450227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.450569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.450876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.450886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.451227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.451527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.451537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.451820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.452023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.452033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.452325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.452711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.452721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.453019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.453318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.453328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 
00:31:39.338 [2024-12-07 05:46:42.453597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.453795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.453804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.453983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.454210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.454220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.454400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.454683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.454693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.454849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.455038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.455049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.455228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.455395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.455406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.455687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.455693] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.338 [2024-12-07 05:46:42.456007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.456020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.456124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.456352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.456362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 
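host/target_disconnect.sh@21 then creates the TCP transport inside the target with rpc_cmd nvmf_create_transport -t tcp -o, and the target-side notice interleaved just above (tcp.c:661 nvmf_tcp_create: *** TCP Transport Init ***) is the corresponding acknowledgement. The same call issued directly is sketched below; the -o switch is copied verbatim from the trace rather than documented here, so consult scripts/rpc.py nvmf_create_transport -h in this SPDK revision for what it toggles.
#!/usr/bin/env bash
# Sketch: the transport-creation RPC issued directly against a running nvmf_tgt.
# $SPDK_DIR is a placeholder; -o is reproduced from the trace as-is.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o
# On success the target logs:  tcp.c ... *** TCP Transport Init ***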
00:31:39.338 [2024-12-07 05:46:42.456550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.456721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.456731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.456904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.457201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.457211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.457419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.457761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.457771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.458137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.458474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.338 [2024-12-07 05:46:42.458483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.338 qpair failed and we were unable to recover it. 00:31:39.338 [2024-12-07 05:46:42.458849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.459122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.459132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.459388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.459548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.459558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.459821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.460017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.460027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 
00:31:39.339 [2024-12-07 05:46:42.460233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.460543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.460553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.460765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.461064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.461074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.461130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.461451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.461461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.461772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.462145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.462155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.462367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.462546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.462556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.462741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.463226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 
00:31:39.339 [2024-12-07 05:46:42.463451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.463848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.463926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.464224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.464568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.464577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.339 [2024-12-07 05:46:42.464793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.464972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.464982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 05:46:42 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:39.339 [2024-12-07 05:46:42.465315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.339 [2024-12-07 05:46:42.465633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.465644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.339 [2024-12-07 05:46:42.465954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.466306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.466317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.466632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.466939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.466949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 
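Further down, host/target_disconnect.sh@22 creates the subsystem the initiator has been trying to reach: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 (-a allows any host to connect, -s sets the serial number). This excerpt ends before a namespace or listener is attached, which is why the connect() retries are still being refused; the usual continuation, shown here only as an assumed sketch and not taken from this portion of the log, attaches Malloc0 and starts the 10.0.0.2:4420 listener.
#!/usr/bin/env bash
# Sketch of the subsystem setup; the add-ns/add-listener steps are the typical
# follow-on and are assumed here, not read from this excerpt.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
NQN=nqn.2016-06.io.spdk:cnode1

"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Malloc0        # assumed next step
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" \
    -t tcp -a 10.0.0.2 -s 4420                                         # assumed next step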
00:31:39.339 [2024-12-07 05:46:42.467145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.467311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.467323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.467634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.467978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.467989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.468301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.468496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.468506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.468834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.469453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.469592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.469818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.339 [2024-12-07 05:46:42.470130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.470453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.470463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 
00:31:39.339 [2024-12-07 05:46:42.470660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.470976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.339 [2024-12-07 05:46:42.470986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.339 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.471304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.471588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.471598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.471895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.472300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.472310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.472620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.472785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.472795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.472844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.473152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.473162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.473444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.473629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.473638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.473812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.474024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.474034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 
00:31:39.340 [2024-12-07 05:46:42.474339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.474528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.474538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.474855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.475174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.475184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.475517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.475802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.475811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.476024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.476202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.476211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.476512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.340 [2024-12-07 05:46:42.476838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.476848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 05:46:42 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:39.340 [2024-12-07 05:46:42.477181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.340 [2024-12-07 05:46:42.477372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.477384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.340 [2024-12-07 05:46:42.477618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.477807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.477817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 
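Interleaved with the connection errors above, the shell trace shows the test configuring the target over JSON-RPC: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, followed by rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. As a rough standalone sketch (assuming rpc_cmd wraps SPDK's scripts/rpc.py against the default RPC socket and that a Malloc0 bdev already exists; neither is shown in this excerpt), the same two steps would be:

    # sketch only: replay the two traced RPCs against a running nvmf_tgt
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a = allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach the Malloc0 bdev as a namespace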
00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Read completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 Write completed with error (sct=0, sc=8) 00:31:39.340 starting I/O failed 00:31:39.340 [2024-12-07 05:46:42.478541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:39.340 [2024-12-07 05:46:42.478990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.479510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.479600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffb20000b90 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 
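The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries, closed by "CQ transport error -6 (No such device or address) on qpair id 1", shows outstanding I/O completing with errors once the qpair's transport drops; the surrounding "connect() failed, errno = 111" lines are Linux ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 at that moment. A hypothetical spot check from the initiator host (not part of the traced script) would be:

    # errno 111 is ECONNREFUSED on Linux; a raw TCP connect should be refused while the target port is down
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (matches errno = 111 above)"
    fi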
00:31:39.340 [2024-12-07 05:46:42.479794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.480296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.480386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffb20000b90 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.480746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.480941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.480951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.481249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.481397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.481407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.481738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.481792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.481802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.481990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.482187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.482197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.340 qpair failed and we were unable to recover it. 00:31:39.340 [2024-12-07 05:46:42.482532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.340 [2024-12-07 05:46:42.482866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.482876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.483182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.483227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.483237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 
00:31:39.341 [2024-12-07 05:46:42.483522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.483810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.483820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.484139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.484317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.484326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.484614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.484788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.484798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.485026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.485214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.485224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.485410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.485749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.485759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.486060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.486244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.486254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.486413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.486697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.486709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 
00:31:39.341 [2024-12-07 05:46:42.486887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.487193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.487204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.487509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.487819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.487829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.488146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.488353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.488362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.488696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.341 [2024-12-07 05:46:42.489040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.489051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 05:46:42 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.341 [2024-12-07 05:46:42.489372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.341 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.341 [2024-12-07 05:46:42.489683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.489693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.490015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.490252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.490262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.490567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.490863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.490873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 
00:31:39.341 [2024-12-07 05:46:42.491193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.491537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.491547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.491925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.491974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.491983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.492306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.492482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.492492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.492666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.492849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.492859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.493166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.493475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.493485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.493664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.494301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 
00:31:39.341 [2024-12-07 05:46:42.494674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.494875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.495188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.495370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.495380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.495546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.495902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.341 [2024-12-07 05:46:42.495912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x718380 with addr=10.0.0.2, port=4420 00:31:39.341 qpair failed and we were unable to recover it. 00:31:39.341 [2024-12-07 05:46:42.496036] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.341 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.341 05:46:42 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:39.341 05:46:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.341 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.341 [2024-12-07 05:46:42.506619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.341 [2024-12-07 05:46:42.506697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.341 [2024-12-07 05:46:42.506717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.341 [2024-12-07 05:46:42.506728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.342 [2024-12-07 05:46:42.506735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.342 [2024-12-07 05:46:42.506752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.342 qpair failed and we were unable to recover it. 
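The trace in this stretch also shows rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 and rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420, after which the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420". A standalone sketch of the same listener setup, plus an optional discovery check from the initiator (the nvme discover step is an assumption, requires nvme-cli, and is not part of the traced script):

    # sketch: expose the subsystem and the discovery service on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # optional check from the initiator side (assumes nvme-cli is installed):
    nvme discover -t tcp -a 10.0.0.2 -s 4420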
00:31:39.342 05:46:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.342 05:46:42 -- host/target_disconnect.sh@58 -- # wait 2034960 00:31:39.342 [2024-12-07 05:46:42.516613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.342 [2024-12-07 05:46:42.516711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.342 [2024-12-07 05:46:42.516726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.342 [2024-12-07 05:46:42.516734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.342 [2024-12-07 05:46:42.516741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.342 [2024-12-07 05:46:42.516755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.342 qpair failed and we were unable to recover it. 00:31:39.342 [2024-12-07 05:46:42.526601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.342 [2024-12-07 05:46:42.526654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.342 [2024-12-07 05:46:42.526670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.342 [2024-12-07 05:46:42.526677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.342 [2024-12-07 05:46:42.526684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.342 [2024-12-07 05:46:42.526697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.342 qpair failed and we were unable to recover it. 00:31:39.342 [2024-12-07 05:46:42.536592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.342 [2024-12-07 05:46:42.536657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.342 [2024-12-07 05:46:42.536672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.342 [2024-12-07 05:46:42.536679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.342 [2024-12-07 05:46:42.536686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.342 [2024-12-07 05:46:42.536700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.342 qpair failed and we were unable to recover it. 
00:31:39.342 [2024-12-07 05:46:42.546584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.342 [2024-12-07 05:46:42.546653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.342 [2024-12-07 05:46:42.546668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.342 [2024-12-07 05:46:42.546676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.342 [2024-12-07 05:46:42.546682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.342 [2024-12-07 05:46:42.546701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.342 qpair failed and we were unable to recover it. 00:31:39.605 [2024-12-07 05:46:42.556662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.605 [2024-12-07 05:46:42.556731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.605 [2024-12-07 05:46:42.556746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.605 [2024-12-07 05:46:42.556753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.605 [2024-12-07 05:46:42.556760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.605 [2024-12-07 05:46:42.556773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.605 qpair failed and we were unable to recover it. 00:31:39.605 [2024-12-07 05:46:42.566652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.605 [2024-12-07 05:46:42.566710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.605 [2024-12-07 05:46:42.566727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.605 [2024-12-07 05:46:42.566734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.605 [2024-12-07 05:46:42.566741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.605 [2024-12-07 05:46:42.566754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.605 qpair failed and we were unable to recover it. 
00:31:39.605 [2024-12-07 05:46:42.576663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.576724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.576739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.576746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.576753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.576766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.586663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.586720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.586734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.586741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.586748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.586762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.596688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.596737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.596755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.596762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.596769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.596783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 
00:31:39.606 [2024-12-07 05:46:42.606736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.606796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.606811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.606818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.606825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.606838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.616634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.616698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.616712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.616719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.616726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.616739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.626793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.626895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.626910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.626917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.626923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.626937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 
00:31:39.606 [2024-12-07 05:46:42.636805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.636870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.636883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.636890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.636897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.636915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.646825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.646884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.646899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.646907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.646913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.646926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.656884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.656948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.656962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.656968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.656975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.656988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 
00:31:39.606 [2024-12-07 05:46:42.666906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.667001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.667019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.667026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.667033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.667047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.676932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.676984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.676998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.677005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.677017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.677031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.686973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.687060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.687078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.687086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.687092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.687106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 
00:31:39.606 [2024-12-07 05:46:42.696978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.697050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.697066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.606 [2024-12-07 05:46:42.697073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.606 [2024-12-07 05:46:42.697079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.606 [2024-12-07 05:46:42.697094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.606 qpair failed and we were unable to recover it. 00:31:39.606 [2024-12-07 05:46:42.706966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.606 [2024-12-07 05:46:42.707034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.606 [2024-12-07 05:46:42.707048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.707055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.707062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.707075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 00:31:39.607 [2024-12-07 05:46:42.717014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.717074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.717088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.717096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.717103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.717117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 
00:31:39.607 [2024-12-07 05:46:42.727056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.727111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.727125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.727133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.727139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.727156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 00:31:39.607 [2024-12-07 05:46:42.737121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.737181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.737196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.737203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.737210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.737224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 00:31:39.607 [2024-12-07 05:46:42.747053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.747139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.747154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.747162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.747168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.747183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 
00:31:39.607 [2024-12-07 05:46:42.757170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.757229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.757244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.757251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.757258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.757272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 00:31:39.607 [2024-12-07 05:46:42.767193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.767246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.767260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.767268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.767274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.767288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 00:31:39.607 [2024-12-07 05:46:42.777210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.607 [2024-12-07 05:46:42.777272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.607 [2024-12-07 05:46:42.777292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.607 [2024-12-07 05:46:42.777300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.607 [2024-12-07 05:46:42.777306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:39.607 [2024-12-07 05:46:42.777320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.607 qpair failed and we were unable to recover it. 
[2024-12-07 05:46:42.787369 - 2024-12-07 05:46:43.439232, elapsed 00:31:39.607 - 00:31:40.397] the same seven-line CONNECT failure block repeats for 66 further I/O qpair connect attempts against tqpair=0x718380 (qpair id 3), differing only in timestamps: "Unknown controller ID 0x1" -> "Connect command failed, rc -5" -> "Connect command completed with error: sct 1, sc 130" -> "Failed to poll NVMe-oF Fabric CONNECT command" -> "Failed to connect tqpair=0x718380" -> "CQ transport error -6 (No such device or address) on qpair id 3" -> "qpair failed and we were unable to recover it."
00:31:40.397 [2024-12-07 05:46:43.449151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.449202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.449216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.449227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.449233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.449246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.459241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.459302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.459316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.459323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.459330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.459343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.469232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.469292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.469306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.469313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.469319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.469332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 
00:31:40.397 [2024-12-07 05:46:43.479232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.479286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.479300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.479308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.479314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.479328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.489251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.489306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.489321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.489328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.489335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.489348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.499303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.499363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.499377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.499384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.499391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.499404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 
00:31:40.397 [2024-12-07 05:46:43.509343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.509443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.509457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.509464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.509470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.509483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.519318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.397 [2024-12-07 05:46:43.519376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.397 [2024-12-07 05:46:43.519390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.397 [2024-12-07 05:46:43.519397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.397 [2024-12-07 05:46:43.519403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.397 [2024-12-07 05:46:43.519417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.397 qpair failed and we were unable to recover it. 00:31:40.397 [2024-12-07 05:46:43.529358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.529415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.529429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.529436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.529443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.529456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 
00:31:40.398 [2024-12-07 05:46:43.539336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.539431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.539445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.539456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.539463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.539476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.549444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.549499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.549513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.549520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.549527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.549540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.559466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.559518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.559532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.559539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.559546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.559559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 
00:31:40.398 [2024-12-07 05:46:43.569378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.569427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.569440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.569448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.569454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.569467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.579524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.579586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.579601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.579608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.579614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.579628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.589475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.589539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.589554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.589561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.589567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.589581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 
00:31:40.398 [2024-12-07 05:46:43.599584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.599638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.599652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.599659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.599666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.599679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.609608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.609665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.609680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.609687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.609694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.609707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.398 [2024-12-07 05:46:43.619631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.619692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.619706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.619713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.619720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.619733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 
00:31:40.398 [2024-12-07 05:46:43.629657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.398 [2024-12-07 05:46:43.629721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.398 [2024-12-07 05:46:43.629736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.398 [2024-12-07 05:46:43.629750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.398 [2024-12-07 05:46:43.629756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.398 [2024-12-07 05:46:43.629770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.398 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.639678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.639742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.639756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.639764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.639770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.639784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.649717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.649773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.649788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.649795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.649802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.649815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 
00:31:40.661 [2024-12-07 05:46:43.659756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.659818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.659833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.659840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.659847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.659862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.669650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.669714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.669728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.669736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.669742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.669756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.679708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.679764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.679779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.679786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.679792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.679806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 
00:31:40.661 [2024-12-07 05:46:43.689833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.689886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.689901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.689908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.689915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.689928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.699858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.699919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.699934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.699941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.699947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.699960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.709892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.709954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.709969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.709976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.709982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.709995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 
00:31:40.661 [2024-12-07 05:46:43.719848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.719904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.719919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.719929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.719936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.719949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.729924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.729978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.729993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.730001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.730015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.730029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 00:31:40.661 [2024-12-07 05:46:43.739959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.661 [2024-12-07 05:46:43.740046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.661 [2024-12-07 05:46:43.740061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.661 [2024-12-07 05:46:43.740068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.661 [2024-12-07 05:46:43.740075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.661 [2024-12-07 05:46:43.740089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.661 qpair failed and we were unable to recover it. 
00:31:40.661 [2024-12-07 05:46:43.749988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.750053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.750068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.750075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.750081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.750095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.760028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.760082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.760097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.760104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.760111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.760125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.770034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.770111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.770125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.770133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.770139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.770153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 
00:31:40.662 [2024-12-07 05:46:43.780120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.780180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.780194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.780201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.780208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.780222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.790111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.790175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.790190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.790197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.790204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.790217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.800104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.800154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.800168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.800176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.800183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.800196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 
00:31:40.662 [2024-12-07 05:46:43.810149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.810202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.810216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.810227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.810233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.810246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.820214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.820317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.820331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.820338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.820345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.820358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.830216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.830312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.830326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.830333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.830339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.830353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 
00:31:40.662 [2024-12-07 05:46:43.840245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.840321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.840336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.840343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.840350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.840363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.850305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.850379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.850394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.850401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.850407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.850420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.860318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.860375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.860390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.860397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.860403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.860417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 
00:31:40.662 [2024-12-07 05:46:43.870325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.870383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.870397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.662 [2024-12-07 05:46:43.870404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.662 [2024-12-07 05:46:43.870411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.662 [2024-12-07 05:46:43.870424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.662 qpair failed and we were unable to recover it. 00:31:40.662 [2024-12-07 05:46:43.880376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.662 [2024-12-07 05:46:43.880428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.662 [2024-12-07 05:46:43.880442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.663 [2024-12-07 05:46:43.880450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.663 [2024-12-07 05:46:43.880456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.663 [2024-12-07 05:46:43.880470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.663 qpair failed and we were unable to recover it. 00:31:40.663 [2024-12-07 05:46:43.890378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.663 [2024-12-07 05:46:43.890430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.663 [2024-12-07 05:46:43.890445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.663 [2024-12-07 05:46:43.890452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.663 [2024-12-07 05:46:43.890459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.663 [2024-12-07 05:46:43.890472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.663 qpair failed and we were unable to recover it. 
00:31:40.925 [2024-12-07 05:46:43.900289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.925 [2024-12-07 05:46:43.900349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.925 [2024-12-07 05:46:43.900367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.925 [2024-12-07 05:46:43.900375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.925 [2024-12-07 05:46:43.900381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.925 [2024-12-07 05:46:43.900395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.925 qpair failed and we were unable to recover it. 00:31:40.925 [2024-12-07 05:46:43.910462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.925 [2024-12-07 05:46:43.910521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.925 [2024-12-07 05:46:43.910535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.925 [2024-12-07 05:46:43.910542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.925 [2024-12-07 05:46:43.910548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.925 [2024-12-07 05:46:43.910561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.925 qpair failed and we were unable to recover it. 00:31:40.925 [2024-12-07 05:46:43.920454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.925 [2024-12-07 05:46:43.920552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.925 [2024-12-07 05:46:43.920566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.925 [2024-12-07 05:46:43.920573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.925 [2024-12-07 05:46:43.920579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.925 [2024-12-07 05:46:43.920593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.925 qpair failed and we were unable to recover it. 
00:31:40.925 [2024-12-07 05:46:43.930474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.925 [2024-12-07 05:46:43.930574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.925 [2024-12-07 05:46:43.930589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.925 [2024-12-07 05:46:43.930596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.925 [2024-12-07 05:46:43.930602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.925 [2024-12-07 05:46:43.930615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.925 qpair failed and we were unable to recover it. 00:31:40.925 [2024-12-07 05:46:43.940556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.925 [2024-12-07 05:46:43.940637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.925 [2024-12-07 05:46:43.940652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.925 [2024-12-07 05:46:43.940659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.925 [2024-12-07 05:46:43.940666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.925 [2024-12-07 05:46:43.940679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.925 qpair failed and we were unable to recover it. 00:31:40.925 [2024-12-07 05:46:43.950533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:43.950593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:43.950608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:43.950615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:43.950621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:43.950635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 
00:31:40.926 [2024-12-07 05:46:43.960597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:43.960651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:43.960665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:43.960672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:43.960679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:43.960692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:43.970625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:43.970681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:43.970695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:43.970702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:43.970709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:43.970722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:43.980529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:43.980588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:43.980602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:43.980609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:43.980616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:43.980630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 
00:31:40.926 [2024-12-07 05:46:43.990680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:43.990739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:43.990757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:43.990764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:43.990771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:43.990785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:44.000718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.000783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.000809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.000818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.000826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.000844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:44.010741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.010800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.010827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.010836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.010843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.010861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 
00:31:40.926 [2024-12-07 05:46:44.020756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.020822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.020838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.020846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.020853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.020868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:44.030787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.030890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.030905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.030913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.030919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.030933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:44.040826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.040923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.040938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.040946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.040952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.040967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 
00:31:40.926 [2024-12-07 05:46:44.050839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.926 [2024-12-07 05:46:44.050894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.926 [2024-12-07 05:46:44.050908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.926 [2024-12-07 05:46:44.050916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.926 [2024-12-07 05:46:44.050922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.926 [2024-12-07 05:46:44.050935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.926 qpair failed and we were unable to recover it. 00:31:40.926 [2024-12-07 05:46:44.060868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.060931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.060946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.060953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.060960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.060973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.070888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.070940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.070954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.070961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.070968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.070981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 
00:31:40.927 [2024-12-07 05:46:44.080916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.080967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.080985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.080993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.080999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.081018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.090961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.091024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.091039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.091047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.091053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.091067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.100976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.101071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.101088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.101095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.101102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.101117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 
00:31:40.927 [2024-12-07 05:46:44.111009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.111073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.111088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.111095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.111102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.111115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.121052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.121107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.121121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.121128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.121135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.121153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.131063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.131118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.131133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.131140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.131147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.131160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 
00:31:40.927 [2024-12-07 05:46:44.141077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.141140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.141156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.141163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.141169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.141183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.151026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.151078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.151093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.151100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.151106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.927 [2024-12-07 05:46:44.151120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.927 qpair failed and we were unable to recover it. 00:31:40.927 [2024-12-07 05:46:44.161040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.927 [2024-12-07 05:46:44.161092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.927 [2024-12-07 05:46:44.161107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.927 [2024-12-07 05:46:44.161114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.927 [2024-12-07 05:46:44.161120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:40.928 [2024-12-07 05:46:44.161134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.928 qpair failed and we were unable to recover it. 
00:31:41.190 [2024-12-07 05:46:44.171196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.190 [2024-12-07 05:46:44.171256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.190 [2024-12-07 05:46:44.171275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.190 [2024-12-07 05:46:44.171283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.190 [2024-12-07 05:46:44.171289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.190 [2024-12-07 05:46:44.171303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.181182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.181244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.181258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.181266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.181272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.181286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.191266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.191353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.191367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.191375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.191381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.191395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 
00:31:41.191 [2024-12-07 05:46:44.201299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.201353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.201367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.201374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.201380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.201394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.211171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.211226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.211240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.211247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.211253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.211270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.221350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.221413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.221427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.221434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.221441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.221454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 
00:31:41.191 [2024-12-07 05:46:44.231360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.231424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.231437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.231445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.231451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.231465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.241393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.241450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.241464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.241471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.241478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.241492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.251321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.251381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.251395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.251402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.251409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.251422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 
00:31:41.191 [2024-12-07 05:46:44.261351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.261413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.261430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.261437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.261444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.261457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.271487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.271544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.271559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.271566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.271572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.271586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.281509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.281560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.281574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.281581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.281588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.281601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 
00:31:41.191 [2024-12-07 05:46:44.291547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.291603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.291617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.291625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.291631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.291644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.301573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.191 [2024-12-07 05:46:44.301635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.191 [2024-12-07 05:46:44.301650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.191 [2024-12-07 05:46:44.301657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.191 [2024-12-07 05:46:44.301663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.191 [2024-12-07 05:46:44.301683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.191 qpair failed and we were unable to recover it. 00:31:41.191 [2024-12-07 05:46:44.311489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.311549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.311564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.311571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.311578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.311591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 
00:31:41.192 [2024-12-07 05:46:44.321581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.321666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.321680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.321687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.321693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.321707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.331613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.331668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.331682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.331690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.331696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.331709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.341683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.341775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.341790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.341798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.341804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.341817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 
00:31:41.192 [2024-12-07 05:46:44.351720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.351786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.351817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.351826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.351834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.351853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.361745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.361802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.361828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.361837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.361845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.361863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.371763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.371856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.371883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.371892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.371899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.371917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 
00:31:41.192 [2024-12-07 05:46:44.381813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.381877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.381894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.381901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.381908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.381922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.391831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.391892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.391907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.391914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.391921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.391939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.401859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.401945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.401959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.401966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.401973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.401986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 
00:31:41.192 [2024-12-07 05:46:44.411875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.411928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.411942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.411949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.411956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.411969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.192 [2024-12-07 05:46:44.421924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.192 [2024-12-07 05:46:44.421984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.192 [2024-12-07 05:46:44.421999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.192 [2024-12-07 05:46:44.422007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.192 [2024-12-07 05:46:44.422020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.192 [2024-12-07 05:46:44.422034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.192 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.431949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.432002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.432021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.432029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.432035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.432049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 
00:31:41.455 [2024-12-07 05:46:44.441973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.442034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.442053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.442061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.442068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.442082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.452000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.452063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.452078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.452085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.452092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.452105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.461909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.461974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.461988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.461995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.462002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.462022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 
00:31:41.455 [2024-12-07 05:46:44.471892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.471946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.471960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.471967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.471974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.471987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.482083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.482138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.482153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.482160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.482167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.482184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.492140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.492211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.492226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.492233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.492240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.492253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 
00:31:41.455 [2024-12-07 05:46:44.502080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.502143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.502157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.502164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.502171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.502184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.512135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.512184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.512197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.512204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.512211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.512224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 00:31:41.455 [2024-12-07 05:46:44.522177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.522233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.455 [2024-12-07 05:46:44.522248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.455 [2024-12-07 05:46:44.522255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.455 [2024-12-07 05:46:44.522262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.455 [2024-12-07 05:46:44.522275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.455 qpair failed and we were unable to recover it. 
00:31:41.455 [2024-12-07 05:46:44.532244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.455 [2024-12-07 05:46:44.532296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.532313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.532321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.532327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.532340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.542260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.542323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.542337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.542344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.542351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.542365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.552255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.552326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.552339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.552347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.552353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.552367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 
00:31:41.456 [2024-12-07 05:46:44.562347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.562410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.562423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.562431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.562437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.562451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.572352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.572399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.572413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.572420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.572430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.572444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.582367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.582426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.582440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.582448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.582454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.582468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 
00:31:41.456 [2024-12-07 05:46:44.592344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.592400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.592413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.592421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.592427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.592440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.602312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.602360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.602374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.602382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.602388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.602402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.612456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.612510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.612524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.612531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.612538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.612551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 
00:31:41.456 [2024-12-07 05:46:44.622468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.622529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.622543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.622550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.622556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.622570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.632341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.632397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.632411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.632419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.632425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.632439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.642562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.642648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.642662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.642669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.642676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.642689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 
00:31:41.456 [2024-12-07 05:46:44.652579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.652640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.652654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.652661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.652668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.456 [2024-12-07 05:46:44.652681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.456 qpair failed and we were unable to recover it. 00:31:41.456 [2024-12-07 05:46:44.662620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.456 [2024-12-07 05:46:44.662683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.456 [2024-12-07 05:46:44.662696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.456 [2024-12-07 05:46:44.662703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.456 [2024-12-07 05:46:44.662714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.457 [2024-12-07 05:46:44.662727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.457 qpair failed and we were unable to recover it. 00:31:41.457 [2024-12-07 05:46:44.672579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-12-07 05:46:44.672630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-12-07 05:46:44.672644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-12-07 05:46:44.672651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-12-07 05:46:44.672657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.457 [2024-12-07 05:46:44.672670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.457 qpair failed and we were unable to recover it. 
00:31:41.457 [2024-12-07 05:46:44.682647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.457 [2024-12-07 05:46:44.682700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.457 [2024-12-07 05:46:44.682716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.457 [2024-12-07 05:46:44.682723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.457 [2024-12-07 05:46:44.682730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.457 [2024-12-07 05:46:44.682744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.457 qpair failed and we were unable to recover it. 00:31:41.720 [2024-12-07 05:46:44.692686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-12-07 05:46:44.692752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-12-07 05:46:44.692779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-12-07 05:46:44.692788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-12-07 05:46:44.692795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.720 [2024-12-07 05:46:44.692813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.720 qpair failed and we were unable to recover it. 00:31:41.720 [2024-12-07 05:46:44.702738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.720 [2024-12-07 05:46:44.702835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.720 [2024-12-07 05:46:44.702862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.720 [2024-12-07 05:46:44.702871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.720 [2024-12-07 05:46:44.702878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.720 [2024-12-07 05:46:44.702897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.720 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-12-07 05:46:44.712697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.712752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.712768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.712775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.712782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.712796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.722775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.722825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.722840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.722848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.722855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.722868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.732714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.732769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.732783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.732791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.732798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.732811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-12-07 05:46:44.742777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.742835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.742851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.742858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.742865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.742879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.752853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.752927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.752942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.752949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.752960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.752974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.762878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.762936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.762951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.762959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.762966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.762979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-12-07 05:46:44.772902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.772953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.772967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.772974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.772981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.772995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.782959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.783067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.783082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.783089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.783096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.783110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.792919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.792968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.792982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.792990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.792996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.793014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-12-07 05:46:44.802982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.803038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.803053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.803061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.803067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.803081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.813068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.813120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.813134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.813141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.813147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.813161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.822964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.823062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.823076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.823084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.823090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.823104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 
00:31:41.721 [2024-12-07 05:46:44.833074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.721 [2024-12-07 05:46:44.833128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.721 [2024-12-07 05:46:44.833141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.721 [2024-12-07 05:46:44.833149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.721 [2024-12-07 05:46:44.833155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.721 [2024-12-07 05:46:44.833169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.721 qpair failed and we were unable to recover it. 00:31:41.721 [2024-12-07 05:46:44.842975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.843033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.843048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.843056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.843065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.843080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.853134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.853187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.853203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.853210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.853217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.853235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 
00:31:41.722 [2024-12-07 05:46:44.863174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.863233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.863248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.863256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.863262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.863276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.873158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.873208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.873222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.873229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.873235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.873248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.883133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.883184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.883198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.883206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.883212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.883225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 
00:31:41.722 [2024-12-07 05:46:44.893283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.893387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.893402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.893409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.893416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.893429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.903191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.903252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.903267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.903274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.903280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.903294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.913245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.913299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.913313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.913320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.913326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.913340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 
00:31:41.722 [2024-12-07 05:46:44.923312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.923357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.923370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.923378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.923385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.923398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.933392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.933500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.933514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.933521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.933531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.933545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.722 [2024-12-07 05:46:44.943354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.943413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.943427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.943435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.943441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.943455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 
00:31:41.722 [2024-12-07 05:46:44.953407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.722 [2024-12-07 05:46:44.953494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.722 [2024-12-07 05:46:44.953508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.722 [2024-12-07 05:46:44.953515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.722 [2024-12-07 05:46:44.953522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.722 [2024-12-07 05:46:44.953535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.722 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:44.963399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:44.963448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:44.963462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:44.963469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:44.963476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:44.963489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:44.973461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:44.973511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:44.973525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:44.973533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:44.973539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:44.973553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 
00:31:41.986 [2024-12-07 05:46:44.983511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:44.983585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:44.983600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:44.983608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:44.983614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:44.983627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:44.993443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:44.993500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:44.993515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:44.993522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:44.993528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:44.993542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:45.003461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.003509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.003523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.003530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.003537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.003550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 
00:31:41.986 [2024-12-07 05:46:45.013563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.013625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.013640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.013647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.013654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.013667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:45.023488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.023550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.023564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.023575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.023581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.023595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:45.033480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.033531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.033546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.033553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.033560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.033574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 
00:31:41.986 [2024-12-07 05:46:45.043606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.043681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.043698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.043705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.043712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.043726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:45.053689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.053743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.053757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.053765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.053771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.053784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 00:31:41.986 [2024-12-07 05:46:45.063764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.063833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.986 [2024-12-07 05:46:45.063848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.986 [2024-12-07 05:46:45.063855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.986 [2024-12-07 05:46:45.063861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.986 [2024-12-07 05:46:45.063875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.986 qpair failed and we were unable to recover it. 
00:31:41.986 [2024-12-07 05:46:45.073708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.986 [2024-12-07 05:46:45.073766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.073781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.073788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.073794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.073808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.083723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.083765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.083779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.083787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.083793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.083807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.093813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.093894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.093910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.093918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.093924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.093939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 
00:31:41.987 [2024-12-07 05:46:45.103838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.103901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.103916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.103923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.103930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.103943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.113812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.113866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.113879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.113890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.113897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.113910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.123837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.123888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.123903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.123910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.123916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.123930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 
00:31:41.987 [2024-12-07 05:46:45.133912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.133968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.133983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.133990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.133997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.134017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.143959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.144029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.144044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.144051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.144058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.144071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.153924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.153974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.153987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.153995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.154001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.154019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 
00:31:41.987 [2024-12-07 05:46:45.163962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.164014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.164028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.164035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.164042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.164056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.173993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.174053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.174068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.174075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.174082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.174096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 00:31:41.987 [2024-12-07 05:46:45.184072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.987 [2024-12-07 05:46:45.184150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.987 [2024-12-07 05:46:45.184164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.987 [2024-12-07 05:46:45.184171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.987 [2024-12-07 05:46:45.184178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.987 [2024-12-07 05:46:45.184192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.987 qpair failed and we were unable to recover it. 
00:31:41.988 [2024-12-07 05:46:45.194054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.988 [2024-12-07 05:46:45.194111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.988 [2024-12-07 05:46:45.194125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.988 [2024-12-07 05:46:45.194132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.988 [2024-12-07 05:46:45.194139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.988 [2024-12-07 05:46:45.194152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.988 qpair failed and we were unable to recover it. 00:31:41.988 [2024-12-07 05:46:45.204072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.988 [2024-12-07 05:46:45.204129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.988 [2024-12-07 05:46:45.204143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.988 [2024-12-07 05:46:45.204154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.988 [2024-12-07 05:46:45.204160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.988 [2024-12-07 05:46:45.204174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.988 qpair failed and we were unable to recover it. 00:31:41.988 [2024-12-07 05:46:45.214121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.988 [2024-12-07 05:46:45.214173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.988 [2024-12-07 05:46:45.214187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.988 [2024-12-07 05:46:45.214195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.988 [2024-12-07 05:46:45.214201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:41.988 [2024-12-07 05:46:45.214215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.988 qpair failed and we were unable to recover it. 
00:31:42.250 [2024-12-07 05:46:45.224155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.224216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.224231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.224238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.224245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.224258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.234146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.234203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.234219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.234226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.234233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.234250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.244180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.244229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.244245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.244253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.244259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.244273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 
00:31:42.250 [2024-12-07 05:46:45.254296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.254351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.254366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.254373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.254380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.254393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.264316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.264392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.264406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.264413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.264420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.264433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.274271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.274324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.274338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.274345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.274352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.274365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 
00:31:42.250 [2024-12-07 05:46:45.284308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.284359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.284375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.284382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.284389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.284405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.294371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.294420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.294435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.294446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.294452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.294466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.304273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.304336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.304350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.304357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.304364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.304377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 
00:31:42.250 [2024-12-07 05:46:45.314375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.314432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.314446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.314453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.314460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.314473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.250 [2024-12-07 05:46:45.324414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.250 [2024-12-07 05:46:45.324458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.250 [2024-12-07 05:46:45.324472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.250 [2024-12-07 05:46:45.324480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.250 [2024-12-07 05:46:45.324486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.250 [2024-12-07 05:46:45.324500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.250 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.334400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.334451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.334465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.334473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.334480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.334493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 
00:31:42.251 [2024-12-07 05:46:45.344491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.344587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.344601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.344608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.344615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.344628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.354514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.354566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.354579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.354587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.354594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.354607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.364520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.364573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.364587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.364595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.364601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.364615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 
00:31:42.251 [2024-12-07 05:46:45.374578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.374635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.374650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.374657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.374664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.374677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.384614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.384674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.384688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.384698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.384705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.384718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.394628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.394677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.394690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.394697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.394704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.394717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 
00:31:42.251 [2024-12-07 05:46:45.404620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.404671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.404686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.404693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.404700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.404713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.414619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.414678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.414692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.414700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.414706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.414719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.424696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.424765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.424791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.424800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.424807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.424826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 
00:31:42.251 [2024-12-07 05:46:45.434720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.434785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.434811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.434821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.434828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.434846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.444731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.444794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.444822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.444832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.444840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.444858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 00:31:42.251 [2024-12-07 05:46:45.454803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.454862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.454879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.251 [2024-12-07 05:46:45.454887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.251 [2024-12-07 05:46:45.454894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.251 [2024-12-07 05:46:45.454908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.251 qpair failed and we were unable to recover it. 
00:31:42.251 [2024-12-07 05:46:45.464829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.251 [2024-12-07 05:46:45.464892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.251 [2024-12-07 05:46:45.464908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.252 [2024-12-07 05:46:45.464915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.252 [2024-12-07 05:46:45.464921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.252 [2024-12-07 05:46:45.464935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.252 qpair failed and we were unable to recover it. 00:31:42.252 [2024-12-07 05:46:45.474784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.252 [2024-12-07 05:46:45.474840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.252 [2024-12-07 05:46:45.474855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.252 [2024-12-07 05:46:45.474866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.252 [2024-12-07 05:46:45.474873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.252 [2024-12-07 05:46:45.474886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.252 qpair failed and we were unable to recover it. 00:31:42.252 [2024-12-07 05:46:45.484812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.252 [2024-12-07 05:46:45.484860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.252 [2024-12-07 05:46:45.484876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.252 [2024-12-07 05:46:45.484883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.252 [2024-12-07 05:46:45.484889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.252 [2024-12-07 05:46:45.484903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.252 qpair failed and we were unable to recover it. 
00:31:42.514 [2024-12-07 05:46:45.494902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.494995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.495014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.495023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.495029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.495043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.504953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.505024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.505039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.505046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.505052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.505066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.514932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.514989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.515003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.515017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.515025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.515038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 
00:31:42.514 [2024-12-07 05:46:45.524952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.525007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.525026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.525034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.525040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.525055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.535061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.535113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.535128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.535135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.535142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.535155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.545025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.545119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.545134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.545141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.545148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.545162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 
00:31:42.514 [2024-12-07 05:46:45.555036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.555087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.555100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.555108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.555114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.555127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.564940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.564989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.565006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.565020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.565026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.565041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 00:31:42.514 [2024-12-07 05:46:45.575126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.575203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.575217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.514 [2024-12-07 05:46:45.575225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.514 [2024-12-07 05:46:45.575231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.514 [2024-12-07 05:46:45.575245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.514 qpair failed and we were unable to recover it. 
00:31:42.514 [2024-12-07 05:46:45.585174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.514 [2024-12-07 05:46:45.585234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.514 [2024-12-07 05:46:45.585249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.585256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.585263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.585276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.595147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.595205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.595220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.595227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.595234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.595249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.605069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.605124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.605138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.605146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.605152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.605166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 
00:31:42.515 [2024-12-07 05:46:45.615152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.615221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.615236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.615243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.615249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.615263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.625333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.625442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.625456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.625464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.625470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.625483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.635276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.635329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.635342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.635349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.635356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.635370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 
00:31:42.515 [2024-12-07 05:46:45.645252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.645309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.645323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.645330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.645336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.645350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.655223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.655284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.655302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.655309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.655316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.655329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.665383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.665445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.665458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.665465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.665471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.665485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 
00:31:42.515 [2024-12-07 05:46:45.675333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.675389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.675403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.675410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.675416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.675430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.685398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.685448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.685463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.685470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.685476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.685490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.695450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.695505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.695520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.695527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.695534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.695550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 
00:31:42.515 [2024-12-07 05:46:45.705499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.705559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.515 [2024-12-07 05:46:45.705575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.515 [2024-12-07 05:46:45.705582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.515 [2024-12-07 05:46:45.705589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.515 [2024-12-07 05:46:45.705602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.515 qpair failed and we were unable to recover it. 00:31:42.515 [2024-12-07 05:46:45.715481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.515 [2024-12-07 05:46:45.715535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.516 [2024-12-07 05:46:45.715550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.516 [2024-12-07 05:46:45.715557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.516 [2024-12-07 05:46:45.715564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.516 [2024-12-07 05:46:45.715577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.516 qpair failed and we were unable to recover it. 00:31:42.516 [2024-12-07 05:46:45.725503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.516 [2024-12-07 05:46:45.725554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.516 [2024-12-07 05:46:45.725568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.516 [2024-12-07 05:46:45.725576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.516 [2024-12-07 05:46:45.725582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.516 [2024-12-07 05:46:45.725596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.516 qpair failed and we were unable to recover it. 
00:31:42.516 [2024-12-07 05:46:45.735567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.516 [2024-12-07 05:46:45.735615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.516 [2024-12-07 05:46:45.735630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.516 [2024-12-07 05:46:45.735637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.516 [2024-12-07 05:46:45.735643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.516 [2024-12-07 05:46:45.735657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.516 qpair failed and we were unable to recover it. 00:31:42.516 [2024-12-07 05:46:45.745606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.516 [2024-12-07 05:46:45.745666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.516 [2024-12-07 05:46:45.745684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.516 [2024-12-07 05:46:45.745692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.516 [2024-12-07 05:46:45.745699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.516 [2024-12-07 05:46:45.745712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.516 qpair failed and we were unable to recover it. 00:31:42.777 [2024-12-07 05:46:45.755579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.777 [2024-12-07 05:46:45.755638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.777 [2024-12-07 05:46:45.755651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.777 [2024-12-07 05:46:45.755659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.777 [2024-12-07 05:46:45.755665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.777 [2024-12-07 05:46:45.755679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.777 qpair failed and we were unable to recover it. 
00:31:42.777 [2024-12-07 05:46:45.765600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.777 [2024-12-07 05:46:45.765643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.777 [2024-12-07 05:46:45.765658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.777 [2024-12-07 05:46:45.765665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.777 [2024-12-07 05:46:45.765672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.777 [2024-12-07 05:46:45.765685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.777 qpair failed and we were unable to recover it. 00:31:42.777 [2024-12-07 05:46:45.775673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.777 [2024-12-07 05:46:45.775735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.777 [2024-12-07 05:46:45.775749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.777 [2024-12-07 05:46:45.775757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.777 [2024-12-07 05:46:45.775763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.777 [2024-12-07 05:46:45.775776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.777 qpair failed and we were unable to recover it. 00:31:42.777 [2024-12-07 05:46:45.785701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.777 [2024-12-07 05:46:45.785767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.777 [2024-12-07 05:46:45.785794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.777 [2024-12-07 05:46:45.785804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.777 [2024-12-07 05:46:45.785811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.785835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 
00:31:42.778 [2024-12-07 05:46:45.795705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.795764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.795790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.795799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.795807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.795825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.805713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.805765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.805792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.805800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.805807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.805826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.815672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.815728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.815745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.815753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.815760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.815774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 
00:31:42.778 [2024-12-07 05:46:45.825760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.825825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.825840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.825847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.825854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.825868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.835828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.835882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.835905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.835913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.835919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.835933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.845880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.845933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.845960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.845969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.845976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.845994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 
00:31:42.778 [2024-12-07 05:46:45.855808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.855871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.855890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.855898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.855909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.855924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.865957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.866026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.866042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.866050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.866057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.866071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.875943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.875995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.876009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.876023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.876029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.876048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 
00:31:42.778 [2024-12-07 05:46:45.885971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.886021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.886037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.886044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.886051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.886065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.895914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.895970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.895985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.895993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.895999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.896017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 00:31:42.778 [2024-12-07 05:46:45.906074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.906139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.906153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.906160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.906167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.778 [2024-12-07 05:46:45.906181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.778 qpair failed and we were unable to recover it. 
00:31:42.778 [2024-12-07 05:46:45.916045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.778 [2024-12-07 05:46:45.916096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.778 [2024-12-07 05:46:45.916110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.778 [2024-12-07 05:46:45.916117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.778 [2024-12-07 05:46:45.916123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.916137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.926080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.926129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.926147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.926154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.926161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.926175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.936167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.936219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.936234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.936241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.936248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.936261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 
00:31:42.779 [2024-12-07 05:46:45.946177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.946241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.946256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.946263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.946270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.946284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.956148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.956211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.956226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.956234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.956240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.956254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.966117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.966162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.966177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.966184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.966191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.966207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 
00:31:42.779 [2024-12-07 05:46:45.976246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.976302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.976316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.976324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.976330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.976344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.986300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.986361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.986375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.986382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.986388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.986402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:42.779 [2024-12-07 05:46:45.996245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:45.996298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:45.996313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:45.996320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:45.996326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:45.996340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 
00:31:42.779 [2024-12-07 05:46:46.006330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.779 [2024-12-07 05:46:46.006427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.779 [2024-12-07 05:46:46.006441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.779 [2024-12-07 05:46:46.006449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.779 [2024-12-07 05:46:46.006456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:42.779 [2024-12-07 05:46:46.006469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.779 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.016378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.016435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.016452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.016460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.016467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.016480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.026390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.026451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.026465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.026472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.026478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.026492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 
00:31:43.041 [2024-12-07 05:46:46.036425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.036506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.036520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.036527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.036534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.036547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.046392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.046443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.046458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.046465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.046471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.046486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.056474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.056530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.056544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.056551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.056558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.056574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 
00:31:43.041 [2024-12-07 05:46:46.066503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.066562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.066576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.066583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.066590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.066603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.076490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.076541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.076554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.076562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.076568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.076581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.086439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.086489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.086503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.086511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.086517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.086531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 
00:31:43.041 [2024-12-07 05:46:46.096586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.096676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.096692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.096700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.096706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.096721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.106599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.106660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.106679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.106686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.106693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.106707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.116485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.116535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.116550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.116558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.116564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.116578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 
00:31:43.041 [2024-12-07 05:46:46.126625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.126678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.126693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.126700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.126707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.126720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.041 qpair failed and we were unable to recover it. 00:31:43.041 [2024-12-07 05:46:46.136683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.041 [2024-12-07 05:46:46.136736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.041 [2024-12-07 05:46:46.136751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.041 [2024-12-07 05:46:46.136758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.041 [2024-12-07 05:46:46.136764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.041 [2024-12-07 05:46:46.136778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.146702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.146761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.146776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.146783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.146793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.146806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 
00:31:43.042 [2024-12-07 05:46:46.156716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.156770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.156787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.156794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.156801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.156814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.166617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.166670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.166684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.166691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.166697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.166711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.176817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.176877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.176892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.176899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.176905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.176919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 
00:31:43.042 [2024-12-07 05:46:46.186724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.186784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.186800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.186807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.186814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.186828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.196816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.196874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.196893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.196900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.196907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.196920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.206843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.206889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.206903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.206911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.206917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.206931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 
00:31:43.042 [2024-12-07 05:46:46.216845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.216898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.216913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.216920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.216926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.216940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.226830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.226890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.226904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.226911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.226918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.226931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.236955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.237008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.237028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.237035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.237045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.237059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 
00:31:43.042 [2024-12-07 05:46:46.246998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.247053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.247068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.247075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.247082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.247096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.257019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.257071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.257085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.257093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.257099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.257112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 00:31:43.042 [2024-12-07 05:46:46.267098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.267207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.042 [2024-12-07 05:46:46.267221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.042 [2024-12-07 05:46:46.267229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.042 [2024-12-07 05:46:46.267235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.042 [2024-12-07 05:46:46.267249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.042 qpair failed and we were unable to recover it. 
00:31:43.042 [2024-12-07 05:46:46.277030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.042 [2024-12-07 05:46:46.277081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.043 [2024-12-07 05:46:46.277094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.043 [2024-12-07 05:46:46.277101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.043 [2024-12-07 05:46:46.277108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.043 [2024-12-07 05:46:46.277122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.043 qpair failed and we were unable to recover it. 00:31:43.308 [2024-12-07 05:46:46.287064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.308 [2024-12-07 05:46:46.287123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.308 [2024-12-07 05:46:46.287137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.308 [2024-12-07 05:46:46.287145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.308 [2024-12-07 05:46:46.287151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.308 [2024-12-07 05:46:46.287165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.308 qpair failed and we were unable to recover it. 00:31:43.308 [2024-12-07 05:46:46.297120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.308 [2024-12-07 05:46:46.297174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.308 [2024-12-07 05:46:46.297189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.308 [2024-12-07 05:46:46.297196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.308 [2024-12-07 05:46:46.297203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.308 [2024-12-07 05:46:46.297216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.308 qpair failed and we were unable to recover it. 
00:31:43.308 [2024-12-07 05:46:46.307183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.307279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.307293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.307300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.307307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.307320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.317179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.317229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.317242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.317249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.317256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.317269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.327257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.327333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.327347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.327354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.327365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.327379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 
00:31:43.309 [2024-12-07 05:46:46.337289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.337347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.337360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.337368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.337374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.337388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.347291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.347351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.347365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.347372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.347379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.347392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.357246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.357294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.357308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.357315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.357322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.357335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 
00:31:43.309 [2024-12-07 05:46:46.367298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.367345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.367360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.367367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.367374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.367387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.377363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.377423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.377438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.377445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.377452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.377465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.387363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.387423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.387436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.387444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.387450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.387464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 
00:31:43.309 [2024-12-07 05:46:46.397340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.397393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.397407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.397414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.397420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.397434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.407402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.407451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.407465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.407473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.407479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.407492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.417334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.417389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.417403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.417410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.417420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.417433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 
00:31:43.309 [2024-12-07 05:46:46.427499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.427560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.427574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.427582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.427588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.427602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.437516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.437609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.437623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.437630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.437637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.437650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.447555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.447624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.447638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.447646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.447652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.447666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 
00:31:43.309 [2024-12-07 05:46:46.457545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.457597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.457612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.457619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.309 [2024-12-07 05:46:46.457626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.309 [2024-12-07 05:46:46.457639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.309 qpair failed and we were unable to recover it. 00:31:43.309 [2024-12-07 05:46:46.467488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.309 [2024-12-07 05:46:46.467552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.309 [2024-12-07 05:46:46.467566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.309 [2024-12-07 05:46:46.467573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.467579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.467593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 00:31:43.310 [2024-12-07 05:46:46.477585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.477649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.477663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.477670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.477677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.477690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 
00:31:43.310 [2024-12-07 05:46:46.487612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.487678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.487693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.487701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.487707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.487724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 00:31:43.310 [2024-12-07 05:46:46.497599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.497682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.497697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.497704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.497711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.497724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 00:31:43.310 [2024-12-07 05:46:46.507717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.507776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.507791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.507798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.507808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.507821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 
00:31:43.310 [2024-12-07 05:46:46.517674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.517735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.517762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.517771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.517778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.517797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 00:31:43.310 [2024-12-07 05:46:46.527725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.527789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.527815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.527824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.527832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.527850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 00:31:43.310 [2024-12-07 05:46:46.537784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.310 [2024-12-07 05:46:46.537840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.310 [2024-12-07 05:46:46.537867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.310 [2024-12-07 05:46:46.537876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.310 [2024-12-07 05:46:46.537883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.310 [2024-12-07 05:46:46.537901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.310 qpair failed and we were unable to recover it. 
00:31:43.571 [2024-12-07 05:46:46.547810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.571 [2024-12-07 05:46:46.547875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.571 [2024-12-07 05:46:46.547893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.571 [2024-12-07 05:46:46.547900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.571 [2024-12-07 05:46:46.547907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.571 [2024-12-07 05:46:46.547922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.571 qpair failed and we were unable to recover it. 00:31:43.571 [2024-12-07 05:46:46.557784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.571 [2024-12-07 05:46:46.557869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.571 [2024-12-07 05:46:46.557885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.571 [2024-12-07 05:46:46.557892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.571 [2024-12-07 05:46:46.557899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.571 [2024-12-07 05:46:46.557912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.571 qpair failed and we were unable to recover it. 00:31:43.571 [2024-12-07 05:46:46.567846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.571 [2024-12-07 05:46:46.567898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.571 [2024-12-07 05:46:46.567913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.571 [2024-12-07 05:46:46.567920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.571 [2024-12-07 05:46:46.567927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.571 [2024-12-07 05:46:46.567940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.571 qpair failed and we were unable to recover it. 
00:31:43.571 [2024-12-07 05:46:46.577905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.571 [2024-12-07 05:46:46.577960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.571 [2024-12-07 05:46:46.577974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.571 [2024-12-07 05:46:46.577982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.571 [2024-12-07 05:46:46.577988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.571 [2024-12-07 05:46:46.578001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.571 qpair failed and we were unable to recover it. 00:31:43.571 [2024-12-07 05:46:46.587960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.571 [2024-12-07 05:46:46.588026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.571 [2024-12-07 05:46:46.588042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.571 [2024-12-07 05:46:46.588050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.588056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.588070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.597937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.597990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.598004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.598016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.598028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.598042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 
00:31:43.572 [2024-12-07 05:46:46.607980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.608035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.608049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.608057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.608063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.608077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.617958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.618015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.618030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.618037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.618044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.618057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.628067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.628129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.628143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.628150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.628157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.628171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 
00:31:43.572 [2024-12-07 05:46:46.638045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.638102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.638116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.638124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.638130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.638144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.648111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.648170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.648185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.648192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.648198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.648212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.658093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.658145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.658159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.658166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.658173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.658186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 
00:31:43.572 [2024-12-07 05:46:46.668159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.668221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.668235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.668242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.668249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.668263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.678154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.678205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.678219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.678226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.678233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.678246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.688141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.688187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.688201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.688215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.688222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.688236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 
00:31:43.572 [2024-12-07 05:46:46.698189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.698238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.698253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.698260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.698266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.698279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.708271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.708345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.708359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.708366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.708373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.708386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 00:31:43.572 [2024-12-07 05:46:46.718248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.572 [2024-12-07 05:46:46.718304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.572 [2024-12-07 05:46:46.718318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.572 [2024-12-07 05:46:46.718325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.572 [2024-12-07 05:46:46.718331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.572 [2024-12-07 05:46:46.718344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.572 qpair failed and we were unable to recover it. 
00:31:43.572 [2024-12-07 05:46:46.728285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.728334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.728348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.728355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.728362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.728375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.573 [2024-12-07 05:46:46.738293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.738336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.738351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.738358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.738365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.738378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.573 [2024-12-07 05:46:46.748386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.748444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.748459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.748466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.748473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.748487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 
00:31:43.573 [2024-12-07 05:46:46.758369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.758425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.758439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.758447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.758453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.758466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.573 [2024-12-07 05:46:46.768389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.768445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.768459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.768466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.768473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.768486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.573 [2024-12-07 05:46:46.778388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.778434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.778448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.778458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.778465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.778478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 
00:31:43.573 [2024-12-07 05:46:46.788387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.788448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.788462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.788469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.788476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.788490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.573 [2024-12-07 05:46:46.798477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.573 [2024-12-07 05:46:46.798535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.573 [2024-12-07 05:46:46.798549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.573 [2024-12-07 05:46:46.798557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.573 [2024-12-07 05:46:46.798563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.573 [2024-12-07 05:46:46.798577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.573 qpair failed and we were unable to recover it. 00:31:43.835 [2024-12-07 05:46:46.808365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.835 [2024-12-07 05:46:46.808420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.835 [2024-12-07 05:46:46.808434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.835 [2024-12-07 05:46:46.808442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.835 [2024-12-07 05:46:46.808449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.835 [2024-12-07 05:46:46.808463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.835 qpair failed and we were unable to recover it. 
00:31:43.835 [2024-12-07 05:46:46.818521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.835 [2024-12-07 05:46:46.818571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.835 [2024-12-07 05:46:46.818585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.835 [2024-12-07 05:46:46.818593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.835 [2024-12-07 05:46:46.818599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.835 [2024-12-07 05:46:46.818612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.835 qpair failed and we were unable to recover it. 00:31:43.835 [2024-12-07 05:46:46.828535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.835 [2024-12-07 05:46:46.828594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.835 [2024-12-07 05:46:46.828608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.835 [2024-12-07 05:46:46.828615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.828622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.828635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.838569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.838622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.838635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.838642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.838649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.838662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 
00:31:43.836 [2024-12-07 05:46:46.848610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.848657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.848671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.848679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.848685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.848699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.858547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.858600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.858614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.858621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.858628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.858641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.868588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.868650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.868664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.868675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.868681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.868694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 
00:31:43.836 [2024-12-07 05:46:46.878582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.878639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.878656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.878664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.878670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.878684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.888693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.888742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.888756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.888763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.888770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.888783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.898757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.898807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.898822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.898829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.898835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.898849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 
00:31:43.836 [2024-12-07 05:46:46.908790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.908862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.908876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.908884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.908891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.908904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.918795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.918849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.918862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.918869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.918876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.836 [2024-12-07 05:46:46.918889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.836 qpair failed and we were unable to recover it. 00:31:43.836 [2024-12-07 05:46:46.928822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.836 [2024-12-07 05:46:46.928872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.836 [2024-12-07 05:46:46.928886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.836 [2024-12-07 05:46:46.928893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.836 [2024-12-07 05:46:46.928900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.928914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 
00:31:43.837 [2024-12-07 05:46:46.938845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.938899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.938913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.938921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.938927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.938941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:46.948913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.948973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.948988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.948995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.949002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.949020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:46.958924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.958976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.958990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.959000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.959007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.959026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 
00:31:43.837 [2024-12-07 05:46:46.968869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.968917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.968931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.968939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.968945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.968958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:46.978931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.978977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.978991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.978999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.979005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.979024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:46.989053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.989137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.989151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.989158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.989165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.989178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 
00:31:43.837 [2024-12-07 05:46:46.998936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:46.998990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:46.999004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:46.999017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:46.999024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:46.999037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:47.009053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:47.009097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:47.009111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:47.009119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:47.009125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:47.009139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.837 [2024-12-07 05:46:47.019089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:47.019136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:47.019150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:47.019157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:47.019164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:47.019177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 
00:31:43.837 [2024-12-07 05:46:47.029117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.837 [2024-12-07 05:46:47.029178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.837 [2024-12-07 05:46:47.029193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.837 [2024-12-07 05:46:47.029200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.837 [2024-12-07 05:46:47.029207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.837 [2024-12-07 05:46:47.029224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.837 qpair failed and we were unable to recover it. 00:31:43.838 [2024-12-07 05:46:47.039112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.838 [2024-12-07 05:46:47.039167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.838 [2024-12-07 05:46:47.039182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.838 [2024-12-07 05:46:47.039189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.838 [2024-12-07 05:46:47.039196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.838 [2024-12-07 05:46:47.039209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.838 qpair failed and we were unable to recover it. 00:31:43.838 [2024-12-07 05:46:47.049079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.838 [2024-12-07 05:46:47.049133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.838 [2024-12-07 05:46:47.049148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.838 [2024-12-07 05:46:47.049159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.838 [2024-12-07 05:46:47.049165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.838 [2024-12-07 05:46:47.049179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.838 qpair failed and we were unable to recover it. 
00:31:43.838 [2024-12-07 05:46:47.059190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.838 [2024-12-07 05:46:47.059243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.838 [2024-12-07 05:46:47.059258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.838 [2024-12-07 05:46:47.059265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.838 [2024-12-07 05:46:47.059271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.838 [2024-12-07 05:46:47.059285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.838 qpair failed and we were unable to recover it. 00:31:43.838 [2024-12-07 05:46:47.069264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:43.838 [2024-12-07 05:46:47.069329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:43.838 [2024-12-07 05:46:47.069342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:43.838 [2024-12-07 05:46:47.069350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:43.838 [2024-12-07 05:46:47.069356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:43.838 [2024-12-07 05:46:47.069370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.838 qpair failed and we were unable to recover it. 00:31:44.100 [2024-12-07 05:46:47.079241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.079331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.079346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.100 [2024-12-07 05:46:47.079353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.100 [2024-12-07 05:46:47.079360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.100 [2024-12-07 05:46:47.079373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.100 qpair failed and we were unable to recover it. 
00:31:44.100 [2024-12-07 05:46:47.089240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.089282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.089296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.100 [2024-12-07 05:46:47.089303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.100 [2024-12-07 05:46:47.089310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.100 [2024-12-07 05:46:47.089324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.100 qpair failed and we were unable to recover it. 00:31:44.100 [2024-12-07 05:46:47.099301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.099354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.099372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.100 [2024-12-07 05:46:47.099380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.100 [2024-12-07 05:46:47.099387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.100 [2024-12-07 05:46:47.099402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.100 qpair failed and we were unable to recover it. 00:31:44.100 [2024-12-07 05:46:47.109368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.109429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.109444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.100 [2024-12-07 05:46:47.109451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.100 [2024-12-07 05:46:47.109458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.100 [2024-12-07 05:46:47.109472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.100 qpair failed and we were unable to recover it. 
00:31:44.100 [2024-12-07 05:46:47.119346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.119398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.119411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.100 [2024-12-07 05:46:47.119419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.100 [2024-12-07 05:46:47.119425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.100 [2024-12-07 05:46:47.119438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.100 qpair failed and we were unable to recover it. 00:31:44.100 [2024-12-07 05:46:47.129389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.100 [2024-12-07 05:46:47.129448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.100 [2024-12-07 05:46:47.129461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.129468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.129475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.129489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.139408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.139458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.139475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.139483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.139489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.139503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 
00:31:44.101 [2024-12-07 05:46:47.149451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.149512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.149527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.149534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.149540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.149554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.159481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.159556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.159570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.159577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.159583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.159596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.169489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.169537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.169550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.169557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.169564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.169577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 
00:31:44.101 [2024-12-07 05:46:47.179508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.179557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.179573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.179580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.179587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.179600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.189624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.189684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.189698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.189705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.189712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.189725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.199578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.199632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.199648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.199655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.199662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.199675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 
00:31:44.101 [2024-12-07 05:46:47.209599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.209652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.209666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.209673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.209680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.209693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.219642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.219692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.219706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.219714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.101 [2024-12-07 05:46:47.219720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.101 [2024-12-07 05:46:47.219734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.101 qpair failed and we were unable to recover it. 00:31:44.101 [2024-12-07 05:46:47.229664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.101 [2024-12-07 05:46:47.229732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.101 [2024-12-07 05:46:47.229762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.101 [2024-12-07 05:46:47.229771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.229778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.229797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 
00:31:44.102 [2024-12-07 05:46:47.239693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.239755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.239781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.239790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.239797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.239816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.249708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.249768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.249794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.249803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.249810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.249829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.259736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.259817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.259834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.259842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.259848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.259863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 
00:31:44.102 [2024-12-07 05:46:47.269798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.269855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.269870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.269877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.269884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.269897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.279791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.279843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.279857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.279864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.279871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.279884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.289811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.289855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.289871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.289878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.289884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.289898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 
00:31:44.102 [2024-12-07 05:46:47.299828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.299880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.299895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.299902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.299909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.299923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.309801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.309863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.309877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.309884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.309891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.309904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 00:31:44.102 [2024-12-07 05:46:47.319900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.319955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.319973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.319981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.102 [2024-12-07 05:46:47.319987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.102 [2024-12-07 05:46:47.320000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.102 qpair failed and we were unable to recover it. 
00:31:44.102 [2024-12-07 05:46:47.329929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.102 [2024-12-07 05:46:47.329976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.102 [2024-12-07 05:46:47.329990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.102 [2024-12-07 05:46:47.329997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.103 [2024-12-07 05:46:47.330004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.103 [2024-12-07 05:46:47.330023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.103 qpair failed and we were unable to recover it. 00:31:44.365 [2024-12-07 05:46:47.339972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.340026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.365 [2024-12-07 05:46:47.340041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.365 [2024-12-07 05:46:47.340048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.365 [2024-12-07 05:46:47.340055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.365 [2024-12-07 05:46:47.340069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.365 qpair failed and we were unable to recover it. 00:31:44.365 [2024-12-07 05:46:47.349961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.350060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.365 [2024-12-07 05:46:47.350074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.365 [2024-12-07 05:46:47.350082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.365 [2024-12-07 05:46:47.350088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.365 [2024-12-07 05:46:47.350102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.365 qpair failed and we were unable to recover it. 
00:31:44.365 [2024-12-07 05:46:47.359994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.360053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.365 [2024-12-07 05:46:47.360068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.365 [2024-12-07 05:46:47.360077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.365 [2024-12-07 05:46:47.360084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.365 [2024-12-07 05:46:47.360105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.365 qpair failed and we were unable to recover it. 00:31:44.365 [2024-12-07 05:46:47.370038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.370088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.365 [2024-12-07 05:46:47.370102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.365 [2024-12-07 05:46:47.370110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.365 [2024-12-07 05:46:47.370116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.365 [2024-12-07 05:46:47.370131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.365 qpair failed and we were unable to recover it. 00:31:44.365 [2024-12-07 05:46:47.380044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.380098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.365 [2024-12-07 05:46:47.380113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.365 [2024-12-07 05:46:47.380121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.365 [2024-12-07 05:46:47.380127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.365 [2024-12-07 05:46:47.380141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.365 qpair failed and we were unable to recover it. 
00:31:44.365 [2024-12-07 05:46:47.390126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.365 [2024-12-07 05:46:47.390212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.390226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.390233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.390239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.390253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.400026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.400076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.400090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.400097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.400103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.400117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.409999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.410052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.410069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.410076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.410083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.410097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 
00:31:44.366 [2024-12-07 05:46:47.420157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.420202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.420215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.420222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.420229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.420242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.430235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.430301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.430315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.430322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.430329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.430342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.440223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.440282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.440296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.440303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.440310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.440323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 
00:31:44.366 [2024-12-07 05:46:47.450249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.450298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.450313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.450320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.450327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.450343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.460261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.460308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.460323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.460330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.460336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.460349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.470388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.470459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.470474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.470481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.470487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.470501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 
00:31:44.366 [2024-12-07 05:46:47.480292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.480347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.480361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.480369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.480375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.480389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.490335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.490387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.490400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.490408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.490414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.490427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.500437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.500516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.500533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.500540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.500547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.500560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 
00:31:44.366 [2024-12-07 05:46:47.510445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.510507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.510521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.366 [2024-12-07 05:46:47.510529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.366 [2024-12-07 05:46:47.510535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.366 [2024-12-07 05:46:47.510549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.366 qpair failed and we were unable to recover it. 00:31:44.366 [2024-12-07 05:46:47.520364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.366 [2024-12-07 05:46:47.520415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.366 [2024-12-07 05:46:47.520428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.520435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.520441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.520455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.367 [2024-12-07 05:46:47.530482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.530532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.530546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.530553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.530560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.530572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 
00:31:44.367 [2024-12-07 05:46:47.540479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.540523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.540538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.540545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.540552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.540568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.367 [2024-12-07 05:46:47.550579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.550639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.550654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.550661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.550668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.550681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.367 [2024-12-07 05:46:47.560526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.560576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.560590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.560597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.560604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.560617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 
00:31:44.367 [2024-12-07 05:46:47.570561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.570608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.570622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.570630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.570637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.570650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.367 [2024-12-07 05:46:47.580605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.580660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.580675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.580682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.580689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.580702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.367 [2024-12-07 05:46:47.590695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.590758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.590776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.590783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.590790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.590803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 
00:31:44.367 [2024-12-07 05:46:47.600670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.367 [2024-12-07 05:46:47.600725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.367 [2024-12-07 05:46:47.600741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.367 [2024-12-07 05:46:47.600749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.367 [2024-12-07 05:46:47.600755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.367 [2024-12-07 05:46:47.600768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.367 qpair failed and we were unable to recover it. 00:31:44.693 [2024-12-07 05:46:47.610728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.693 [2024-12-07 05:46:47.610805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.693 [2024-12-07 05:46:47.610831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.693 [2024-12-07 05:46:47.610840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.693 [2024-12-07 05:46:47.610848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.693 [2024-12-07 05:46:47.610866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.693 qpair failed and we were unable to recover it. 00:31:44.693 [2024-12-07 05:46:47.620721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.693 [2024-12-07 05:46:47.620790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.620816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.620825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.620832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.620851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 
00:31:44.694 [2024-12-07 05:46:47.630808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.630888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.630915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.630924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.630931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.630955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.640778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.640833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.640849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.640857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.640864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.640878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.650790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.650847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.650862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.650869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.650876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.650889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 
00:31:44.694 [2024-12-07 05:46:47.660829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.660883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.660897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.660904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.660911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.660924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.670900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.670959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.670974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.670981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.670987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.671001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.680760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.680819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.680838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.680846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.680853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.680867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 
00:31:44.694 [2024-12-07 05:46:47.690771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.690819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.690835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.690842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.690849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.690863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.700940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.700989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.701005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.701018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.701025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.701039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.711021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.711084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.711098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.711105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.711112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.711126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 
00:31:44.694 [2024-12-07 05:46:47.720859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.720916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.720931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.720938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.720945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.720962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.730993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.731046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.731059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.731067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.731073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.731087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 00:31:44.694 [2024-12-07 05:46:47.741060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.741110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.741124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.741131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.694 [2024-12-07 05:46:47.741138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.694 [2024-12-07 05:46:47.741152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.694 qpair failed and we were unable to recover it. 
00:31:44.694 [2024-12-07 05:46:47.751131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.694 [2024-12-07 05:46:47.751189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.694 [2024-12-07 05:46:47.751205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.694 [2024-12-07 05:46:47.751212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.751219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.751232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.761102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.761153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.761167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.761174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.761180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.761194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.771210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.771278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.771296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.771303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.771310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.771323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 
00:31:44.695 [2024-12-07 05:46:47.781179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.781228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.781243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.781250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.781257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.781270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.791236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.791298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.791312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.791320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.791326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.791340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.801161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.801246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.801260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.801267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.801274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.801288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 
00:31:44.695 [2024-12-07 05:46:47.811142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.811190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.811204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.811212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.811222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.811235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.821300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.821347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.821361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.821368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.821375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.821389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.831358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.831430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.831444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.831451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.831458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.831471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 
00:31:44.695 [2024-12-07 05:46:47.841369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.841420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.841433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.841440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.841447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.841461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.851349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.851397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.851412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.851419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.851426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.851439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.861313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.861359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.861376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.861384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.861390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.861404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 
00:31:44.695 [2024-12-07 05:46:47.871492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.871561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.871575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.871582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.871589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.871603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.695 [2024-12-07 05:46:47.881466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.695 [2024-12-07 05:46:47.881523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.695 [2024-12-07 05:46:47.881537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.695 [2024-12-07 05:46:47.881544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.695 [2024-12-07 05:46:47.881551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.695 [2024-12-07 05:46:47.881564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.695 qpair failed and we were unable to recover it. 00:31:44.696 [2024-12-07 05:46:47.891482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.696 [2024-12-07 05:46:47.891527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.696 [2024-12-07 05:46:47.891542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.696 [2024-12-07 05:46:47.891549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.696 [2024-12-07 05:46:47.891555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718380 00:31:44.696 [2024-12-07 05:46:47.891569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:44.696 qpair failed and we were unable to recover it. 
00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Read completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 Write completed with error (sct=0, sc=8) 00:31:44.696 starting I/O failed 00:31:44.696 [2024-12-07 05:46:47.891917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.696 [2024-12-07 05:46:47.901518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.696 [2024-12-07 05:46:47.901567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.696 [2024-12-07 05:46:47.901582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.696 [2024-12-07 05:46:47.901588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:31:44.696 [2024-12-07 05:46:47.901593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb18000b90 00:31:44.696 [2024-12-07 05:46:47.901606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.696 qpair failed and we were unable to recover it. 00:31:44.961 [2024-12-07 05:46:47.911592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.961 [2024-12-07 05:46:47.911661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.961 [2024-12-07 05:46:47.911672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.961 [2024-12-07 05:46:47.911678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.961 [2024-12-07 05:46:47.911683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb18000b90 00:31:44.961 [2024-12-07 05:46:47.911694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:44.961 qpair failed and we were unable to recover it. 00:31:44.961 [2024-12-07 05:46:47.912124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725e40 is same with the state(5) to be set 00:31:44.961 [2024-12-07 05:46:47.921573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.961 [2024-12-07 05:46:47.921693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.961 [2024-12-07 05:46:47.921759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.961 [2024-12-07 05:46:47.921786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.961 [2024-12-07 05:46:47.921818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb14000b90 00:31:44.961 [2024-12-07 05:46:47.921871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:44.961 qpair failed and we were unable to recover it. 
00:31:44.961 [2024-12-07 05:46:47.931600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.961 [2024-12-07 05:46:47.931675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.961 [2024-12-07 05:46:47.931707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.961 [2024-12-07 05:46:47.931723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.961 [2024-12-07 05:46:47.931738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb14000b90 00:31:44.961 [2024-12-07 05:46:47.931770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:44.961 qpair failed and we were unable to recover it. 00:31:44.961 [2024-12-07 05:46:47.941618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.961 [2024-12-07 05:46:47.941779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.961 [2024-12-07 05:46:47.941846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.961 [2024-12-07 05:46:47.941872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.961 [2024-12-07 05:46:47.941892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb20000b90 00:31:44.961 [2024-12-07 05:46:47.941947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:44.961 qpair failed and we were unable to recover it. 00:31:44.961 [2024-12-07 05:46:47.951632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:44.961 [2024-12-07 05:46:47.951725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:44.961 [2024-12-07 05:46:47.951776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:44.961 [2024-12-07 05:46:47.951794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:44.961 [2024-12-07 05:46:47.951809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffb20000b90 00:31:44.961 [2024-12-07 05:46:47.951851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:44.961 qpair failed and we were unable to recover it. 
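The repeated CONNECT failures and CQ transport errors above are the expected output of nvmf_target_disconnect_tc2, which tears the target side down while initiator I/O is still outstanding. For reference only, a minimal sketch of the same NVMe-oF/TCP association from a plain nvme-cli initiator, assuming the listener and subsystem NQN reported in these entries (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) and a host with nvme-cli installed, would be:

    # hypothetical manual run of the connect the test keeps retrying
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # tear the association back down once done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

While the target is being disconnected on purpose, such a connect is expected to fail with completions like the sct 1 / sc 130 entries logged above.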
00:31:44.962 [2024-12-07 05:46:47.952409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x725e40 (9): Bad file descriptor 00:31:44.962 Initializing NVMe Controllers 00:31:44.962 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:44.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:44.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:44.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:44.962 Initialization complete. Launching workers. 00:31:44.962 Starting thread on core 1 00:31:44.962 Starting thread on core 2 00:31:44.962 Starting thread on core 3 00:31:44.962 Starting thread on core 0 00:31:44.962 05:46:47 -- host/target_disconnect.sh@59 -- # sync 00:31:44.962 00:31:44.962 real 0m11.415s 00:31:44.962 user 0m21.466s 00:31:44.962 sys 0m3.325s 00:31:44.962 05:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:44.962 05:46:47 -- common/autotest_common.sh@10 -- # set +x 00:31:44.962 ************************************ 00:31:44.962 END TEST nvmf_target_disconnect_tc2 00:31:44.962 ************************************ 00:31:44.962 05:46:48 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:44.962 05:46:48 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:44.962 05:46:48 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:44.962 05:46:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:44.962 05:46:48 -- nvmf/common.sh@116 -- # sync 00:31:44.962 05:46:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:44.962 05:46:48 -- nvmf/common.sh@119 -- # set +e 00:31:44.962 05:46:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:44.962 05:46:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:44.962 rmmod nvme_tcp 00:31:44.962 rmmod nvme_fabrics 00:31:44.962 rmmod nvme_keyring 00:31:44.962 05:46:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:44.962 05:46:48 -- nvmf/common.sh@123 -- # set -e 00:31:44.962 05:46:48 -- nvmf/common.sh@124 -- # return 0 00:31:44.962 05:46:48 -- nvmf/common.sh@477 -- # '[' -n 2035653 ']' 00:31:44.962 05:46:48 -- nvmf/common.sh@478 -- # killprocess 2035653 00:31:44.962 05:46:48 -- common/autotest_common.sh@936 -- # '[' -z 2035653 ']' 00:31:44.962 05:46:48 -- common/autotest_common.sh@940 -- # kill -0 2035653 00:31:44.962 05:46:48 -- common/autotest_common.sh@941 -- # uname 00:31:44.962 05:46:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:44.962 05:46:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2035653 00:31:44.962 05:46:48 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:31:44.962 05:46:48 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:31:44.962 05:46:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2035653' 00:31:44.962 killing process with pid 2035653 00:31:44.962 05:46:48 -- common/autotest_common.sh@955 -- # kill 2035653 00:31:44.962 05:46:48 -- common/autotest_common.sh@960 -- # wait 2035653 00:31:45.224 05:46:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:45.225 05:46:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:45.225 05:46:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:45.225 05:46:48 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.225 05:46:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:45.225 05:46:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.225 05:46:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.225 05:46:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.140 05:46:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:47.140 00:31:47.140 real 0m21.697s 00:31:47.140 user 0m49.437s 00:31:47.140 sys 0m9.362s 00:31:47.140 05:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:47.140 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.140 ************************************ 00:31:47.140 END TEST nvmf_target_disconnect 00:31:47.140 ************************************ 00:31:47.401 05:46:50 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:31:47.401 05:46:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.401 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.401 05:46:50 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:31:47.401 00:31:47.401 real 24m57.937s 00:31:47.401 user 65m51.752s 00:31:47.401 sys 7m0.897s 00:31:47.401 05:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:47.401 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.401 ************************************ 00:31:47.401 END TEST nvmf_tcp 00:31:47.401 ************************************ 00:31:47.401 05:46:50 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:31:47.401 05:46:50 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:47.401 05:46:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:47.401 05:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:47.402 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.402 ************************************ 00:31:47.402 START TEST spdkcli_nvmf_tcp 00:31:47.402 ************************************ 00:31:47.402 05:46:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:47.402 * Looking for test storage... 
00:31:47.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:47.402 05:46:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:47.402 05:46:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:47.402 05:46:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:47.664 05:46:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:47.664 05:46:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:47.664 05:46:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:47.664 05:46:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:47.664 05:46:50 -- scripts/common.sh@335 -- # IFS=.-: 00:31:47.664 05:46:50 -- scripts/common.sh@335 -- # read -ra ver1 00:31:47.664 05:46:50 -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.664 05:46:50 -- scripts/common.sh@336 -- # read -ra ver2 00:31:47.664 05:46:50 -- scripts/common.sh@337 -- # local 'op=<' 00:31:47.664 05:46:50 -- scripts/common.sh@339 -- # ver1_l=2 00:31:47.664 05:46:50 -- scripts/common.sh@340 -- # ver2_l=1 00:31:47.664 05:46:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:47.664 05:46:50 -- scripts/common.sh@343 -- # case "$op" in 00:31:47.664 05:46:50 -- scripts/common.sh@344 -- # : 1 00:31:47.664 05:46:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:47.664 05:46:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:47.664 05:46:50 -- scripts/common.sh@364 -- # decimal 1 00:31:47.664 05:46:50 -- scripts/common.sh@352 -- # local d=1 00:31:47.664 05:46:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.664 05:46:50 -- scripts/common.sh@354 -- # echo 1 00:31:47.664 05:46:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:47.664 05:46:50 -- scripts/common.sh@365 -- # decimal 2 00:31:47.664 05:46:50 -- scripts/common.sh@352 -- # local d=2 00:31:47.664 05:46:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.664 05:46:50 -- scripts/common.sh@354 -- # echo 2 00:31:47.664 05:46:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:47.664 05:46:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:47.664 05:46:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:47.664 05:46:50 -- scripts/common.sh@367 -- # return 0 00:31:47.664 05:46:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.664 05:46:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.664 --rc genhtml_branch_coverage=1 00:31:47.664 --rc genhtml_function_coverage=1 00:31:47.664 --rc genhtml_legend=1 00:31:47.664 --rc geninfo_all_blocks=1 00:31:47.664 --rc geninfo_unexecuted_blocks=1 00:31:47.664 00:31:47.664 ' 00:31:47.664 05:46:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.664 --rc genhtml_branch_coverage=1 00:31:47.664 --rc genhtml_function_coverage=1 00:31:47.664 --rc genhtml_legend=1 00:31:47.664 --rc geninfo_all_blocks=1 00:31:47.664 --rc geninfo_unexecuted_blocks=1 00:31:47.664 00:31:47.664 ' 00:31:47.664 05:46:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.664 --rc genhtml_branch_coverage=1 00:31:47.664 --rc genhtml_function_coverage=1 00:31:47.664 --rc genhtml_legend=1 00:31:47.664 --rc geninfo_all_blocks=1 00:31:47.664 --rc geninfo_unexecuted_blocks=1 00:31:47.664 00:31:47.664 ' 
00:31:47.664 05:46:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:47.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.664 --rc genhtml_branch_coverage=1 00:31:47.664 --rc genhtml_function_coverage=1 00:31:47.664 --rc genhtml_legend=1 00:31:47.664 --rc geninfo_all_blocks=1 00:31:47.664 --rc geninfo_unexecuted_blocks=1 00:31:47.664 00:31:47.664 ' 00:31:47.664 05:46:50 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:47.664 05:46:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:47.664 05:46:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:47.664 05:46:50 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.664 05:46:50 -- nvmf/common.sh@7 -- # uname -s 00:31:47.664 05:46:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.664 05:46:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.664 05:46:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.664 05:46:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.664 05:46:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.664 05:46:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.664 05:46:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.664 05:46:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.664 05:46:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.664 05:46:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.664 05:46:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.664 05:46:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:47.664 05:46:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.664 05:46:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.665 05:46:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.665 05:46:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.665 05:46:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.665 05:46:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.665 05:46:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.665 05:46:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.665 05:46:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.665 05:46:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.665 05:46:50 -- paths/export.sh@5 -- # export PATH 00:31:47.665 05:46:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.665 05:46:50 -- nvmf/common.sh@46 -- # : 0 00:31:47.665 05:46:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:47.665 05:46:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:47.665 05:46:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:47.665 05:46:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.665 05:46:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.665 05:46:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:47.665 05:46:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:47.665 05:46:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:47.665 05:46:50 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:47.665 05:46:50 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:47.665 05:46:50 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:47.665 05:46:50 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:47.665 05:46:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:47.665 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.665 05:46:50 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:47.665 05:46:50 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2037499 00:31:47.665 05:46:50 -- spdkcli/common.sh@34 -- # waitforlisten 2037499 00:31:47.665 05:46:50 -- common/autotest_common.sh@829 -- # '[' -z 2037499 ']' 00:31:47.665 05:46:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.665 05:46:50 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:47.665 05:46:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.665 05:46:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.665 05:46:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.665 05:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.665 [2024-12-07 05:46:50.760713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
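The spdkcli job invoked next feeds the target a list of path-style commands (quoted below and echoed back as "Executing command" lines). As a hand-runnable sketch of the same configuration steps, assuming a running nvmf_tgt on the default RPC socket and SPDK's scripts/spdkcli.py, with one command per invocation rather than the batched spdkcli_job.py run the test uses:

    # create a backing malloc bdev and export it over NVMe-oF/TCP (arguments mirror the job below)
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    # inspect the resulting tree, as the check_match step later does with 'spdkcli.py ll /nvmf'
    ./scripts/spdkcli.py ll /nvmf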
00:31:47.665 [2024-12-07 05:46:50.760794] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037499 ] 00:31:47.665 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.665 [2024-12-07 05:46:50.826939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:47.665 [2024-12-07 05:46:50.900411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:47.665 [2024-12-07 05:46:50.900660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.665 [2024-12-07 05:46:50.900661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.605 05:46:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:48.605 05:46:51 -- common/autotest_common.sh@862 -- # return 0 00:31:48.605 05:46:51 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:48.605 05:46:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:48.605 05:46:51 -- common/autotest_common.sh@10 -- # set +x 00:31:48.605 05:46:51 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:48.605 05:46:51 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:48.605 05:46:51 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:48.605 05:46:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:48.605 05:46:51 -- common/autotest_common.sh@10 -- # set +x 00:31:48.605 05:46:51 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:48.605 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:48.605 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:48.605 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:48.605 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:48.605 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:48.605 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:48.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:48.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:48.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:48.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:48.605 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:48.605 ' 00:31:48.865 [2024-12-07 05:46:51.918321] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:50.770 [2024-12-07 05:46:53.947931] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.150 [2024-12-07 05:46:55.123828] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:54.055 [2024-12-07 05:46:57.286341] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:55.961 [2024-12-07 05:46:59.143954] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:57.347 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:57.348 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:57.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:57.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:57.348 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:57.610 05:47:00 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:57.610 05:47:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:57.610 05:47:00 -- common/autotest_common.sh@10 -- # set +x 00:31:57.610 05:47:00 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:57.610 05:47:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:57.610 05:47:00 -- common/autotest_common.sh@10 -- # set +x 00:31:57.610 05:47:00 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:57.610 05:47:00 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:57.871 05:47:01 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:58.132 05:47:01 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:58.132 05:47:01 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:58.132 05:47:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:58.132 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:31:58.132 05:47:01 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:31:58.132 05:47:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:58.132 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:31:58.132 05:47:01 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:58.132 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:58.132 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:58.132 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:58.132 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:58.132 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:58.133 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:58.133 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:58.133 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:58.133 ' 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:03.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:03.438 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:03.438 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:03.438 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:03.438 05:47:06 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:03.438 05:47:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:03.438 05:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:03.438 05:47:06 -- spdkcli/nvmf.sh@90 -- # killprocess 2037499 00:32:03.438 05:47:06 -- common/autotest_common.sh@936 -- # '[' -z 2037499 ']' 00:32:03.438 05:47:06 -- 
common/autotest_common.sh@940 -- # kill -0 2037499 00:32:03.438 05:47:06 -- common/autotest_common.sh@941 -- # uname 00:32:03.438 05:47:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:03.438 05:47:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2037499 00:32:03.438 05:47:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:03.438 05:47:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:03.438 05:47:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2037499' 00:32:03.438 killing process with pid 2037499 00:32:03.438 05:47:06 -- common/autotest_common.sh@955 -- # kill 2037499 00:32:03.438 [2024-12-07 05:47:06.199168] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:03.438 05:47:06 -- common/autotest_common.sh@960 -- # wait 2037499 00:32:03.438 05:47:06 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:03.438 05:47:06 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:03.438 05:47:06 -- spdkcli/common.sh@13 -- # '[' -n 2037499 ']' 00:32:03.438 05:47:06 -- spdkcli/common.sh@14 -- # killprocess 2037499 00:32:03.438 05:47:06 -- common/autotest_common.sh@936 -- # '[' -z 2037499 ']' 00:32:03.438 05:47:06 -- common/autotest_common.sh@940 -- # kill -0 2037499 00:32:03.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2037499) - No such process 00:32:03.438 05:47:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2037499 is not found' 00:32:03.438 Process with pid 2037499 is not found 00:32:03.438 05:47:06 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:03.438 05:47:06 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:03.438 05:47:06 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:03.438 00:32:03.438 real 0m15.850s 00:32:03.438 user 0m32.712s 00:32:03.438 sys 0m0.710s 00:32:03.438 05:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:03.438 05:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:03.438 ************************************ 00:32:03.438 END TEST spdkcli_nvmf_tcp 00:32:03.438 ************************************ 00:32:03.438 05:47:06 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:03.438 05:47:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:03.438 05:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.438 05:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:03.438 ************************************ 00:32:03.438 START TEST nvmf_identify_passthru 00:32:03.438 ************************************ 00:32:03.438 05:47:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:03.438 * Looking for test storage... 
00:32:03.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.438 05:47:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:03.438 05:47:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:03.438 05:47:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:03.438 05:47:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:03.438 05:47:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:03.438 05:47:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:03.438 05:47:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:03.438 05:47:06 -- scripts/common.sh@335 -- # IFS=.-: 00:32:03.438 05:47:06 -- scripts/common.sh@335 -- # read -ra ver1 00:32:03.438 05:47:06 -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.438 05:47:06 -- scripts/common.sh@336 -- # read -ra ver2 00:32:03.438 05:47:06 -- scripts/common.sh@337 -- # local 'op=<' 00:32:03.438 05:47:06 -- scripts/common.sh@339 -- # ver1_l=2 00:32:03.438 05:47:06 -- scripts/common.sh@340 -- # ver2_l=1 00:32:03.439 05:47:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:03.439 05:47:06 -- scripts/common.sh@343 -- # case "$op" in 00:32:03.439 05:47:06 -- scripts/common.sh@344 -- # : 1 00:32:03.439 05:47:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:03.439 05:47:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:03.439 05:47:06 -- scripts/common.sh@364 -- # decimal 1 00:32:03.439 05:47:06 -- scripts/common.sh@352 -- # local d=1 00:32:03.439 05:47:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.439 05:47:06 -- scripts/common.sh@354 -- # echo 1 00:32:03.439 05:47:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:03.439 05:47:06 -- scripts/common.sh@365 -- # decimal 2 00:32:03.439 05:47:06 -- scripts/common.sh@352 -- # local d=2 00:32:03.439 05:47:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.439 05:47:06 -- scripts/common.sh@354 -- # echo 2 00:32:03.439 05:47:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:03.439 05:47:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:03.439 05:47:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:03.439 05:47:06 -- scripts/common.sh@367 -- # return 0 00:32:03.439 05:47:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.439 05:47:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.439 --rc genhtml_branch_coverage=1 00:32:03.439 --rc genhtml_function_coverage=1 00:32:03.439 --rc genhtml_legend=1 00:32:03.439 --rc geninfo_all_blocks=1 00:32:03.439 --rc geninfo_unexecuted_blocks=1 00:32:03.439 00:32:03.439 ' 00:32:03.439 05:47:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.439 --rc genhtml_branch_coverage=1 00:32:03.439 --rc genhtml_function_coverage=1 00:32:03.439 --rc genhtml_legend=1 00:32:03.439 --rc geninfo_all_blocks=1 00:32:03.439 --rc geninfo_unexecuted_blocks=1 00:32:03.439 00:32:03.439 ' 00:32:03.439 05:47:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.439 --rc genhtml_branch_coverage=1 00:32:03.439 --rc genhtml_function_coverage=1 00:32:03.439 --rc genhtml_legend=1 00:32:03.439 --rc geninfo_all_blocks=1 00:32:03.439 --rc geninfo_unexecuted_blocks=1 00:32:03.439 00:32:03.439 
' 00:32:03.439 05:47:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.439 --rc genhtml_branch_coverage=1 00:32:03.439 --rc genhtml_function_coverage=1 00:32:03.439 --rc genhtml_legend=1 00:32:03.439 --rc geninfo_all_blocks=1 00:32:03.439 --rc geninfo_unexecuted_blocks=1 00:32:03.439 00:32:03.439 ' 00:32:03.439 05:47:06 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.439 05:47:06 -- nvmf/common.sh@7 -- # uname -s 00:32:03.439 05:47:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.439 05:47:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.439 05:47:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.439 05:47:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.439 05:47:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.439 05:47:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.439 05:47:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.439 05:47:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.439 05:47:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.439 05:47:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.439 05:47:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:03.439 05:47:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:03.439 05:47:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.439 05:47:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.439 05:47:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.439 05:47:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.439 05:47:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.439 05:47:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.439 05:47:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.439 05:47:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@5 -- # export PATH 00:32:03.439 
05:47:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- nvmf/common.sh@46 -- # : 0 00:32:03.439 05:47:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:03.439 05:47:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:03.439 05:47:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:03.439 05:47:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.439 05:47:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.439 05:47:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:03.439 05:47:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:03.439 05:47:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:03.439 05:47:06 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.439 05:47:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.439 05:47:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.439 05:47:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.439 05:47:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- paths/export.sh@5 -- # export PATH 00:32:03.439 05:47:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.439 05:47:06 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:03.439 05:47:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:03.439 05:47:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.439 05:47:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:03.439 05:47:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:03.439 05:47:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:03.439 05:47:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.439 05:47:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:03.439 05:47:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.439 05:47:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:03.439 05:47:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:03.439 05:47:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:03.439 05:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:11.582 05:47:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:11.582 05:47:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:11.582 05:47:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:11.582 05:47:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:11.582 05:47:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:11.582 05:47:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:11.582 05:47:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:11.582 05:47:13 -- nvmf/common.sh@294 -- # net_devs=() 00:32:11.582 05:47:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:11.582 05:47:13 -- nvmf/common.sh@295 -- # e810=() 00:32:11.582 05:47:13 -- nvmf/common.sh@295 -- # local -ga e810 00:32:11.582 05:47:13 -- nvmf/common.sh@296 -- # x722=() 00:32:11.582 05:47:13 -- nvmf/common.sh@296 -- # local -ga x722 00:32:11.582 05:47:13 -- nvmf/common.sh@297 -- # mlx=() 00:32:11.582 05:47:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:11.582 05:47:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.582 05:47:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.582 05:47:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.583 05:47:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:11.583 05:47:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:11.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:11.583 05:47:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:11.583 05:47:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:11.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:11.583 05:47:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:11.583 05:47:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.583 05:47:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.583 05:47:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:11.583 Found net devices under 0000:31:00.0: cvl_0_0 00:32:11.583 05:47:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:11.583 05:47:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.583 05:47:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.583 05:47:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:11.583 Found net devices under 0000:31:00.1: cvl_0_1 00:32:11.583 05:47:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:11.583 05:47:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:11.583 05:47:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.583 05:47:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.583 05:47:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:11.583 05:47:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.583 05:47:13 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.583 05:47:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:11.583 05:47:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.583 05:47:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.583 05:47:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:11.583 05:47:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:11.583 05:47:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.583 05:47:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.583 05:47:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.583 05:47:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.583 05:47:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:11.583 05:47:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.583 05:47:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.583 05:47:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.583 05:47:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:11.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:32:11.583 00:32:11.583 --- 10.0.0.2 ping statistics --- 00:32:11.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.583 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:32:11.583 05:47:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:11.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:32:11.583 00:32:11.583 --- 10.0.0.1 ping statistics --- 00:32:11.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.583 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:32:11.583 05:47:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.583 05:47:13 -- nvmf/common.sh@410 -- # return 0 00:32:11.583 05:47:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:11.583 05:47:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.583 05:47:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:11.583 05:47:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.583 05:47:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:11.583 05:47:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:11.583 05:47:13 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:11.583 05:47:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:11.583 05:47:13 -- common/autotest_common.sh@10 -- # set +x 00:32:11.583 05:47:13 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:11.583 05:47:13 -- common/autotest_common.sh@1519 -- # bdfs=() 00:32:11.583 05:47:13 -- common/autotest_common.sh@1519 -- # local bdfs 00:32:11.583 05:47:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:32:11.583 05:47:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:32:11.583 05:47:13 -- common/autotest_common.sh@1508 -- # bdfs=() 00:32:11.583 05:47:13 -- common/autotest_common.sh@1508 -- # local bdfs 00:32:11.583 05:47:13 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:32:11.583 05:47:13 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:11.583 05:47:13 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:32:11.583 05:47:14 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:32:11.583 05:47:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:65:00.0 00:32:11.583 05:47:14 -- common/autotest_common.sh@1522 -- # echo 0000:65:00.0 00:32:11.583 05:47:14 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:32:11.583 05:47:14 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:32:11.583 05:47:14 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:11.583 05:47:14 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:11.583 05:47:14 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:11.583 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.583 05:47:14 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:32:11.583 05:47:14 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:11.583 05:47:14 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:11.583 05:47:14 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:11.583 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.844 05:47:15 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:32:11.844 05:47:15 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:11.844 05:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:11.844 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:11.844 05:47:15 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:11.844 05:47:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:11.844 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:11.844 05:47:15 -- target/identify_passthru.sh@31 -- # nvmfpid=2044692 00:32:11.844 05:47:15 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.845 05:47:15 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:11.845 05:47:15 -- target/identify_passthru.sh@35 -- # waitforlisten 2044692 00:32:11.845 05:47:15 -- common/autotest_common.sh@829 -- # '[' -z 2044692 ']' 00:32:11.845 05:47:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.845 05:47:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:11.845 05:47:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.845 05:47:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.845 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:12.104 [2024-12-07 05:47:15.104127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
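For reference, the nvmf_tcp_init plumbing traced above reduces to the commands below. This is a minimal sketch, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.x addressing reported in the log, and it must run as root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side E810 port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port through
ping -c 1 10.0.0.2                                     # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check

Putting the target-side port in its own network namespace lets one host act as both initiator and target while still exercising the physical NICs and the kernel TCP stack, which is why every target-side command in the log is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.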
00:32:12.104 [2024-12-07 05:47:15.104181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.104 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.104 [2024-12-07 05:47:15.174777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:12.104 [2024-12-07 05:47:15.244091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:12.104 [2024-12-07 05:47:15.244225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.104 [2024-12-07 05:47:15.244239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.104 [2024-12-07 05:47:15.244247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.104 [2024-12-07 05:47:15.244420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.104 [2024-12-07 05:47:15.244536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.104 [2024-12-07 05:47:15.244694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.104 [2024-12-07 05:47:15.244695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.671 05:47:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:12.671 05:47:15 -- common/autotest_common.sh@862 -- # return 0 00:32:12.671 05:47:15 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:12.671 05:47:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.671 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:12.671 INFO: Log level set to 20 00:32:12.671 INFO: Requests: 00:32:12.671 { 00:32:12.671 "jsonrpc": "2.0", 00:32:12.671 "method": "nvmf_set_config", 00:32:12.671 "id": 1, 00:32:12.671 "params": { 00:32:12.671 "admin_cmd_passthru": { 00:32:12.671 "identify_ctrlr": true 00:32:12.671 } 00:32:12.671 } 00:32:12.671 } 00:32:12.671 00:32:12.671 INFO: response: 00:32:12.671 { 00:32:12.671 "jsonrpc": "2.0", 00:32:12.671 "id": 1, 00:32:12.671 "result": true 00:32:12.671 } 00:32:12.671 00:32:12.671 05:47:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.671 05:47:15 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:12.671 05:47:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.671 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:12.931 INFO: Setting log level to 20 00:32:12.931 INFO: Setting log level to 20 00:32:12.931 INFO: Log level set to 20 00:32:12.931 INFO: Log level set to 20 00:32:12.931 INFO: Requests: 00:32:12.931 { 00:32:12.931 "jsonrpc": "2.0", 00:32:12.931 "method": "framework_start_init", 00:32:12.931 "id": 1 00:32:12.931 } 00:32:12.931 00:32:12.931 INFO: Requests: 00:32:12.931 { 00:32:12.931 "jsonrpc": "2.0", 00:32:12.931 "method": "framework_start_init", 00:32:12.931 "id": 1 00:32:12.931 } 00:32:12.931 00:32:12.931 [2024-12-07 05:47:15.972442] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:12.931 INFO: response: 00:32:12.931 { 00:32:12.931 "jsonrpc": "2.0", 00:32:12.931 "id": 1, 00:32:12.931 "result": true 00:32:12.932 } 00:32:12.932 00:32:12.932 INFO: response: 00:32:12.932 { 00:32:12.932 "jsonrpc": "2.0", 00:32:12.932 "id": 1, 00:32:12.932 "result": true 00:32:12.932 } 00:32:12.932 00:32:12.932 05:47:15 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.932 05:47:15 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.932 05:47:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.932 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:12.932 INFO: Setting log level to 40 00:32:12.932 INFO: Setting log level to 40 00:32:12.932 INFO: Setting log level to 40 00:32:12.932 [2024-12-07 05:47:15.985691] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.932 05:47:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.932 05:47:15 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:12.932 05:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:12.932 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:32:12.932 05:47:16 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:12.932 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.932 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.191 Nvme0n1 00:32:13.191 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.191 05:47:16 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:13.191 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.191 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.191 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.191 05:47:16 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:13.191 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.191 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.192 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.192 05:47:16 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.192 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.192 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.192 [2024-12-07 05:47:16.371365] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.192 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.192 05:47:16 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:13.192 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.192 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.192 [2024-12-07 05:47:16.383138] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:13.192 [ 00:32:13.192 { 00:32:13.192 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:13.192 "subtype": "Discovery", 00:32:13.192 "listen_addresses": [], 00:32:13.192 "allow_any_host": true, 00:32:13.192 "hosts": [] 00:32:13.192 }, 00:32:13.192 { 00:32:13.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.192 "subtype": "NVMe", 00:32:13.192 "listen_addresses": [ 00:32:13.192 { 00:32:13.192 "transport": "TCP", 00:32:13.192 "trtype": "TCP", 00:32:13.192 "adrfam": "IPv4", 00:32:13.192 "traddr": "10.0.0.2", 00:32:13.192 "trsvcid": "4420" 00:32:13.192 } 00:32:13.192 ], 00:32:13.192 "allow_any_host": true, 00:32:13.192 "hosts": [], 00:32:13.192 "serial_number": "SPDK00000000000001", 
00:32:13.192 "model_number": "SPDK bdev Controller", 00:32:13.192 "max_namespaces": 1, 00:32:13.192 "min_cntlid": 1, 00:32:13.192 "max_cntlid": 65519, 00:32:13.192 "namespaces": [ 00:32:13.192 { 00:32:13.192 "nsid": 1, 00:32:13.192 "bdev_name": "Nvme0n1", 00:32:13.192 "name": "Nvme0n1", 00:32:13.192 "nguid": "3634473052605494002538450000002D", 00:32:13.192 "uuid": "36344730-5260-5494-0025-38450000002d" 00:32:13.192 } 00:32:13.192 ] 00:32:13.192 } 00:32:13.192 ] 00:32:13.192 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.192 05:47:16 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:13.192 05:47:16 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:13.192 05:47:16 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:13.451 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.451 05:47:16 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:32:13.451 05:47:16 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:13.451 05:47:16 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:13.451 05:47:16 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:13.451 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.710 05:47:16 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:32:13.710 05:47:16 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:32:13.710 05:47:16 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:32:13.710 05:47:16 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:13.710 05:47:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.710 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:13.710 05:47:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.710 05:47:16 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:13.710 05:47:16 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:13.710 05:47:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:13.710 05:47:16 -- nvmf/common.sh@116 -- # sync 00:32:13.710 05:47:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:13.710 05:47:16 -- nvmf/common.sh@119 -- # set +e 00:32:13.710 05:47:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:13.710 05:47:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:13.710 rmmod nvme_tcp 00:32:13.710 rmmod nvme_fabrics 00:32:13.710 rmmod nvme_keyring 00:32:13.710 05:47:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:13.710 05:47:16 -- nvmf/common.sh@123 -- # set -e 00:32:13.710 05:47:16 -- nvmf/common.sh@124 -- # return 0 00:32:13.710 05:47:16 -- nvmf/common.sh@477 -- # '[' -n 2044692 ']' 00:32:13.710 05:47:16 -- nvmf/common.sh@478 -- # killprocess 2044692 00:32:13.710 05:47:16 -- common/autotest_common.sh@936 -- # '[' -z 2044692 ']' 00:32:13.710 05:47:16 -- common/autotest_common.sh@940 -- # kill -0 2044692 00:32:13.710 05:47:16 -- common/autotest_common.sh@941 -- # uname 00:32:13.710 05:47:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:13.710 05:47:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2044692 00:32:13.710 05:47:16 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:13.710 05:47:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:13.710 05:47:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2044692' 00:32:13.710 killing process with pid 2044692 00:32:13.710 05:47:16 -- common/autotest_common.sh@955 -- # kill 2044692 00:32:13.710 [2024-12-07 05:47:16.847844] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:13.710 05:47:16 -- common/autotest_common.sh@960 -- # wait 2044692 00:32:13.970 05:47:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:13.970 05:47:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:13.970 05:47:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:13.970 05:47:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:13.970 05:47:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:13.970 05:47:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.970 05:47:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:13.970 05:47:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.511 05:47:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:16.511 00:32:16.511 real 0m12.793s 00:32:16.511 user 0m9.901s 00:32:16.511 sys 0m6.199s 00:32:16.511 05:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:16.511 05:47:19 -- common/autotest_common.sh@10 -- # set +x 00:32:16.511 ************************************ 00:32:16.511 END TEST nvmf_identify_passthru 00:32:16.511 ************************************ 00:32:16.511 05:47:19 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:16.511 05:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:16.511 05:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:16.511 05:47:19 -- common/autotest_common.sh@10 -- # set +x 00:32:16.511 ************************************ 00:32:16.511 START TEST nvmf_dif 00:32:16.511 ************************************ 00:32:16.511 05:47:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:16.511 * Looking for test storage... 
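Condensed, the identify_passthru sequence above is a handful of RPCs against a target that was started with --wait-for-rpc. The sketch below issues them with scripts/rpc.py directly rather than through the test's rpc_cmd wrapper, so treat it as illustrative rather than the exact helper invocation:

scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr     # forward Identify data from the backing PCIe controller
scripts/rpc.py framework_start_init                          # needed because nvmf_tgt ran with --wait-for-rpc
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The pass/fail criterion is the pair of string comparisons visible above: the Serial Number and Model Number that spdk_nvme_identify reports over the TCP connection must match the values read directly from the PCIe device (S64GNE0R605494 / SAMSUNG), which is exactly what the passthru identify handler is meant to provide.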
00:32:16.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:16.511 05:47:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:16.511 05:47:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:16.511 05:47:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:16.511 05:47:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:16.512 05:47:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:16.512 05:47:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:16.512 05:47:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:16.512 05:47:19 -- scripts/common.sh@335 -- # IFS=.-: 00:32:16.512 05:47:19 -- scripts/common.sh@335 -- # read -ra ver1 00:32:16.512 05:47:19 -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.512 05:47:19 -- scripts/common.sh@336 -- # read -ra ver2 00:32:16.512 05:47:19 -- scripts/common.sh@337 -- # local 'op=<' 00:32:16.512 05:47:19 -- scripts/common.sh@339 -- # ver1_l=2 00:32:16.512 05:47:19 -- scripts/common.sh@340 -- # ver2_l=1 00:32:16.512 05:47:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:16.512 05:47:19 -- scripts/common.sh@343 -- # case "$op" in 00:32:16.512 05:47:19 -- scripts/common.sh@344 -- # : 1 00:32:16.512 05:47:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:16.512 05:47:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.512 05:47:19 -- scripts/common.sh@364 -- # decimal 1 00:32:16.512 05:47:19 -- scripts/common.sh@352 -- # local d=1 00:32:16.512 05:47:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.512 05:47:19 -- scripts/common.sh@354 -- # echo 1 00:32:16.512 05:47:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:16.512 05:47:19 -- scripts/common.sh@365 -- # decimal 2 00:32:16.512 05:47:19 -- scripts/common.sh@352 -- # local d=2 00:32:16.512 05:47:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.512 05:47:19 -- scripts/common.sh@354 -- # echo 2 00:32:16.512 05:47:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:16.512 05:47:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:16.512 05:47:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:16.512 05:47:19 -- scripts/common.sh@367 -- # return 0 00:32:16.512 05:47:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.512 05:47:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:16.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.512 --rc genhtml_branch_coverage=1 00:32:16.512 --rc genhtml_function_coverage=1 00:32:16.512 --rc genhtml_legend=1 00:32:16.512 --rc geninfo_all_blocks=1 00:32:16.512 --rc geninfo_unexecuted_blocks=1 00:32:16.512 00:32:16.512 ' 00:32:16.512 05:47:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:16.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.512 --rc genhtml_branch_coverage=1 00:32:16.512 --rc genhtml_function_coverage=1 00:32:16.512 --rc genhtml_legend=1 00:32:16.512 --rc geninfo_all_blocks=1 00:32:16.512 --rc geninfo_unexecuted_blocks=1 00:32:16.512 00:32:16.512 ' 00:32:16.512 05:47:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:16.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.512 --rc genhtml_branch_coverage=1 00:32:16.512 --rc genhtml_function_coverage=1 00:32:16.512 --rc genhtml_legend=1 00:32:16.512 --rc geninfo_all_blocks=1 00:32:16.512 --rc geninfo_unexecuted_blocks=1 00:32:16.512 00:32:16.512 
' 00:32:16.512 05:47:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:16.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.512 --rc genhtml_branch_coverage=1 00:32:16.512 --rc genhtml_function_coverage=1 00:32:16.512 --rc genhtml_legend=1 00:32:16.512 --rc geninfo_all_blocks=1 00:32:16.512 --rc geninfo_unexecuted_blocks=1 00:32:16.512 00:32:16.512 ' 00:32:16.512 05:47:19 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.512 05:47:19 -- nvmf/common.sh@7 -- # uname -s 00:32:16.512 05:47:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.512 05:47:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.512 05:47:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.512 05:47:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.512 05:47:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.512 05:47:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.512 05:47:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.512 05:47:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.512 05:47:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.512 05:47:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.512 05:47:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.512 05:47:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.512 05:47:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.512 05:47:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.512 05:47:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.512 05:47:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.512 05:47:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.512 05:47:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.512 05:47:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.512 05:47:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.512 05:47:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.512 05:47:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.512 05:47:19 -- paths/export.sh@5 -- # export PATH 00:32:16.512 05:47:19 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.512 05:47:19 -- nvmf/common.sh@46 -- # : 0 00:32:16.512 05:47:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:16.512 05:47:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:16.512 05:47:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:16.512 05:47:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.512 05:47:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.512 05:47:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:16.512 05:47:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:16.512 05:47:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:16.512 05:47:19 -- target/dif.sh@15 -- # NULL_META=16 00:32:16.512 05:47:19 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:16.512 05:47:19 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:16.512 05:47:19 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:16.512 05:47:19 -- target/dif.sh@135 -- # nvmftestinit 00:32:16.512 05:47:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:16.513 05:47:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.513 05:47:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:16.513 05:47:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:16.513 05:47:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:16.513 05:47:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.513 05:47:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.513 05:47:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.513 05:47:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:16.513 05:47:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:16.513 05:47:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:16.513 05:47:19 -- common/autotest_common.sh@10 -- # set +x 00:32:23.102 05:47:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:23.102 05:47:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:23.102 05:47:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:23.102 05:47:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:23.102 05:47:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:23.102 05:47:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:23.102 05:47:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:23.102 05:47:25 -- nvmf/common.sh@294 -- # net_devs=() 00:32:23.102 05:47:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:23.102 05:47:25 -- nvmf/common.sh@295 -- # e810=() 00:32:23.102 05:47:25 -- nvmf/common.sh@295 -- # local -ga e810 00:32:23.102 05:47:25 -- nvmf/common.sh@296 -- # x722=() 00:32:23.102 05:47:25 -- nvmf/common.sh@296 -- # local -ga x722 00:32:23.102 05:47:25 -- nvmf/common.sh@297 -- # mlx=() 00:32:23.102 05:47:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:23.102 05:47:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.102 05:47:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:23.102 05:47:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:23.102 05:47:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:23.102 05:47:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:23.102 05:47:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:23.102 05:47:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:23.102 05:47:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.102 05:47:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:23.102 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:23.102 05:47:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.102 05:47:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.103 05:47:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:23.103 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:23.103 05:47:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:23.103 05:47:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.103 05:47:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.103 05:47:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.103 05:47:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.103 05:47:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:23.103 Found net devices under 0000:31:00.0: cvl_0_0 00:32:23.103 05:47:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.103 05:47:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.103 05:47:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.103 05:47:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.103 05:47:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.103 05:47:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:23.103 Found net devices under 0000:31:00.1: cvl_0_1 00:32:23.103 05:47:25 -- 
nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.103 05:47:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:23.103 05:47:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:23.103 05:47:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:23.103 05:47:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:23.103 05:47:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.103 05:47:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.103 05:47:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.103 05:47:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:23.103 05:47:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.103 05:47:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.103 05:47:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:23.103 05:47:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.103 05:47:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.103 05:47:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:23.103 05:47:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:23.103 05:47:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.103 05:47:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.103 05:47:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.103 05:47:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.103 05:47:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:23.103 05:47:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.103 05:47:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.103 05:47:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.103 05:47:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:23.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:32:23.103 00:32:23.103 --- 10.0.0.2 ping statistics --- 00:32:23.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.103 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:32:23.103 05:47:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:32:23.103 00:32:23.103 --- 10.0.0.1 ping statistics --- 00:32:23.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.103 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:32:23.103 05:47:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.103 05:47:26 -- nvmf/common.sh@410 -- # return 0 00:32:23.103 05:47:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:23.103 05:47:26 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:27.311 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:27.312 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.312 05:47:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.312 05:47:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:27.312 05:47:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:27.312 05:47:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.312 05:47:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:27.312 05:47:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:27.312 05:47:30 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:27.312 05:47:30 -- target/dif.sh@137 -- # nvmfappstart 00:32:27.312 05:47:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:27.312 05:47:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:27.312 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.312 05:47:30 -- nvmf/common.sh@469 -- # nvmfpid=2050682 00:32:27.312 05:47:30 -- nvmf/common.sh@470 -- # waitforlisten 2050682 00:32:27.312 05:47:30 -- common/autotest_common.sh@829 -- # '[' -z 2050682 ']' 00:32:27.312 05:47:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.312 05:47:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.312 05:47:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
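The "Waiting for process to start up..." message above comes from waitforlisten, which blocks until the namespaced nvmf_tgt answers on its RPC socket. A rough shell equivalent is sketched below; the real helper in autotest_common.sh also supports a configurable socket path and retry count, so this is illustrative only:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
pid=$!
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    # rpc_get_methods only succeeds once /var/tmp/spdk.sock exists and is serving RPCs
    if ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done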
00:32:27.312 05:47:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.312 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.312 05:47:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:27.312 [2024-12-07 05:47:30.280249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:27.312 [2024-12-07 05:47:30.280301] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.312 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.312 [2024-12-07 05:47:30.349386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.312 [2024-12-07 05:47:30.417024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:27.312 [2024-12-07 05:47:30.417142] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.312 [2024-12-07 05:47:30.417151] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.312 [2024-12-07 05:47:30.417158] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.312 [2024-12-07 05:47:30.417177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.884 05:47:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:27.884 05:47:31 -- common/autotest_common.sh@862 -- # return 0 00:32:27.884 05:47:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:27.884 05:47:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:27.884 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 05:47:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.884 05:47:31 -- target/dif.sh@139 -- # create_transport 00:32:27.884 05:47:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:27.884 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.884 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 [2024-12-07 05:47:31.088236] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.884 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.884 05:47:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:27.884 05:47:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:27.884 05:47:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:27.884 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 ************************************ 00:32:27.884 START TEST fio_dif_1_default 00:32:27.884 ************************************ 00:32:27.884 05:47:31 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:32:27.884 05:47:31 -- target/dif.sh@86 -- # create_subsystems 0 00:32:27.884 05:47:31 -- target/dif.sh@28 -- # local sub 00:32:27.884 05:47:31 -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.884 05:47:31 -- target/dif.sh@31 -- # create_subsystem 0 00:32:27.884 05:47:31 -- target/dif.sh@18 -- # local sub_id=0 00:32:27.884 05:47:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:27.884 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.884 05:47:31 -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.884 bdev_null0 00:32:27.884 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.884 05:47:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.884 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.884 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.884 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.884 05:47:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.884 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.884 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:28.144 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.144 05:47:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:28.144 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.144 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:32:28.144 [2024-12-07 05:47:31.132517] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.144 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.144 05:47:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:28.144 05:47:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.144 05:47:31 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.144 05:47:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:28.144 05:47:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.144 05:47:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:28.144 05:47:31 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.144 05:47:31 -- common/autotest_common.sh@1330 -- # shift 00:32:28.144 05:47:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:28.144 05:47:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.144 05:47:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:28.144 05:47:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:28.144 05:47:31 -- target/dif.sh@82 -- # gen_fio_conf 00:32:28.144 05:47:31 -- nvmf/common.sh@520 -- # config=() 00:32:28.144 05:47:31 -- target/dif.sh@54 -- # local file 00:32:28.144 05:47:31 -- nvmf/common.sh@520 -- # local subsystem config 00:32:28.144 05:47:31 -- target/dif.sh@56 -- # cat 00:32:28.144 05:47:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:28.144 05:47:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:28.144 { 00:32:28.144 "params": { 00:32:28.144 "name": "Nvme$subsystem", 00:32:28.144 "trtype": "$TEST_TRANSPORT", 00:32:28.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.144 "adrfam": "ipv4", 00:32:28.144 "trsvcid": "$NVMF_PORT", 00:32:28.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.144 "hdgst": ${hdgst:-false}, 00:32:28.144 "ddgst": ${ddgst:-false} 00:32:28.144 }, 00:32:28.144 "method": "bdev_nvme_attach_controller" 00:32:28.144 } 00:32:28.144 EOF 00:32:28.144 )") 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- 
# ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:28.144 05:47:31 -- nvmf/common.sh@542 -- # cat 00:32:28.144 05:47:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:28.144 05:47:31 -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.144 05:47:31 -- nvmf/common.sh@544 -- # jq . 00:32:28.144 05:47:31 -- nvmf/common.sh@545 -- # IFS=, 00:32:28.144 05:47:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:28.144 "params": { 00:32:28.144 "name": "Nvme0", 00:32:28.144 "trtype": "tcp", 00:32:28.144 "traddr": "10.0.0.2", 00:32:28.144 "adrfam": "ipv4", 00:32:28.144 "trsvcid": "4420", 00:32:28.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.144 "hdgst": false, 00:32:28.144 "ddgst": false 00:32:28.144 }, 00:32:28.144 "method": "bdev_nvme_attach_controller" 00:32:28.144 }' 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:28.144 05:47:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:28.144 05:47:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:28.144 05:47:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:28.144 05:47:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:28.144 05:47:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.144 05:47:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:28.405 fio-3.35 00:32:28.405 Starting 1 thread 00:32:28.405 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.976 [2024-12-07 05:47:32.068811] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:28.976 [2024-12-07 05:47:32.068853] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:38.969 00:32:38.969 filename0: (groupid=0, jobs=1): err= 0: pid=2051221: Sat Dec 7 05:47:42 2024 00:32:38.969 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10014msec) 00:32:38.969 slat (nsec): min=5355, max=56556, avg=6243.80, stdev=2186.03 00:32:38.969 clat (usec): min=611, max=44941, avg=40856.38, stdev=2592.92 00:32:38.969 lat (usec): min=619, max=44975, avg=40862.63, stdev=2592.10 00:32:38.969 clat percentiles (usec): 00:32:38.969 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:38.969 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:38.969 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:38.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:32:38.969 | 99.99th=[44827] 00:32:38.969 bw ( KiB/s): min= 384, max= 416, per=99.63%, avg=390.40, stdev=13.13, samples=20 00:32:38.969 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:32:38.969 lat (usec) : 750=0.41% 00:32:38.969 lat (msec) : 50=99.59% 00:32:38.969 cpu : usr=94.67%, sys=5.12%, ctx=12, majf=0, minf=265 00:32:38.969 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.970 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.970 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:38.970 00:32:38.970 Run status group 0 (all jobs): 00:32:38.970 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10014-10014msec 00:32:39.232 05:47:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:39.232 05:47:42 -- target/dif.sh@43 -- # local sub 00:32:39.232 05:47:42 -- target/dif.sh@45 -- # for sub in "$@" 00:32:39.232 05:47:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:39.232 05:47:42 -- target/dif.sh@36 -- # local sub_id=0 00:32:39.232 05:47:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 00:32:39.232 real 0m11.251s 00:32:39.232 user 0m26.463s 00:32:39.232 sys 0m0.879s 00:32:39.232 05:47:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 ************************************ 00:32:39.232 END TEST fio_dif_1_default 00:32:39.232 ************************************ 00:32:39.232 05:47:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:39.232 05:47:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:39.232 05:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 ************************************ 00:32:39.232 START TEST fio_dif_1_multi_subsystems 
00:32:39.232 ************************************ 00:32:39.232 05:47:42 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:32:39.232 05:47:42 -- target/dif.sh@92 -- # local files=1 00:32:39.232 05:47:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:39.232 05:47:42 -- target/dif.sh@28 -- # local sub 00:32:39.232 05:47:42 -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.232 05:47:42 -- target/dif.sh@31 -- # create_subsystem 0 00:32:39.232 05:47:42 -- target/dif.sh@18 -- # local sub_id=0 00:32:39.232 05:47:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 bdev_null0 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 [2024-12-07 05:47:42.421801] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.232 05:47:42 -- target/dif.sh@31 -- # create_subsystem 1 00:32:39.232 05:47:42 -- target/dif.sh@18 -- # local sub_id=1 00:32:39.232 05:47:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 bdev_null1 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.232 05:47:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.232 05:47:42 -- common/autotest_common.sh@10 -- # 
set +x 00:32:39.232 05:47:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.232 05:47:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:39.232 05:47:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.232 05:47:42 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.232 05:47:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:39.232 05:47:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.232 05:47:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:39.232 05:47:42 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.232 05:47:42 -- common/autotest_common.sh@1330 -- # shift 00:32:39.232 05:47:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:39.232 05:47:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.232 05:47:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:39.232 05:47:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:39.232 05:47:42 -- target/dif.sh@82 -- # gen_fio_conf 00:32:39.232 05:47:42 -- nvmf/common.sh@520 -- # config=() 00:32:39.232 05:47:42 -- target/dif.sh@54 -- # local file 00:32:39.232 05:47:42 -- nvmf/common.sh@520 -- # local subsystem config 00:32:39.232 05:47:42 -- target/dif.sh@56 -- # cat 00:32:39.232 05:47:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:39.232 05:47:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:39.232 { 00:32:39.232 "params": { 00:32:39.232 "name": "Nvme$subsystem", 00:32:39.232 "trtype": "$TEST_TRANSPORT", 00:32:39.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.232 "adrfam": "ipv4", 00:32:39.232 "trsvcid": "$NVMF_PORT", 00:32:39.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.232 "hdgst": ${hdgst:-false}, 00:32:39.232 "ddgst": ${ddgst:-false} 00:32:39.232 }, 00:32:39.232 "method": "bdev_nvme_attach_controller" 00:32:39.232 } 00:32:39.232 EOF 00:32:39.232 )") 00:32:39.232 05:47:42 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.232 05:47:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:39.232 05:47:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:39.232 05:47:42 -- nvmf/common.sh@542 -- # cat 00:32:39.232 05:47:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:39.232 05:47:42 -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.232 05:47:42 -- target/dif.sh@73 -- # cat 00:32:39.232 05:47:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:39.232 05:47:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:39.232 { 00:32:39.232 "params": { 00:32:39.232 "name": "Nvme$subsystem", 00:32:39.232 "trtype": "$TEST_TRANSPORT", 00:32:39.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.232 "adrfam": "ipv4", 00:32:39.232 "trsvcid": "$NVMF_PORT", 00:32:39.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.232 "hdgst": ${hdgst:-false}, 00:32:39.232 "ddgst": ${ddgst:-false} 00:32:39.232 }, 00:32:39.232 "method": "bdev_nvme_attach_controller" 00:32:39.232 } 00:32:39.232 EOF 00:32:39.232 )") 00:32:39.494 05:47:42 -- target/dif.sh@72 -- # (( file++ )) 
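The 'fio /dev/fd/62 ... /dev/fd/61' invocations above hand fio two generated inputs: a JSON bdev config from gen_nvmf_target_json and a job file from gen_fio_conf. Written out as ordinary files for the single-subsystem case, they look roughly like this; the bdev name Nvme0n1 and the 4k/iodepth=4/10s job parameters are inferred from the fio banner and results in the log, everything else follows the JSON printed above, so treat the exact file contents as a sketch:

# dif.json - what gen_nvmf_target_json 0 emits (the config entry shown above, wrapped in the bdev subsystem)
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } }
    ] } ]
}

# dif.job - one 4 KiB random-read job against the exported namespace bdev
[global]
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
runtime=10
time_based=1

# run with the SPDK fio bdev plugin preloaded, as the harness does
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf dif.json dif.job

The multi-subsystems and rand_params variants that follow only change the number of attach_controller entries and the bs/iodepth/numjobs values; the mechanism is identical.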
00:32:39.494 05:47:42 -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.494 05:47:42 -- nvmf/common.sh@542 -- # cat 00:32:39.494 05:47:42 -- nvmf/common.sh@544 -- # jq . 00:32:39.494 05:47:42 -- nvmf/common.sh@545 -- # IFS=, 00:32:39.494 05:47:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:39.494 "params": { 00:32:39.494 "name": "Nvme0", 00:32:39.494 "trtype": "tcp", 00:32:39.494 "traddr": "10.0.0.2", 00:32:39.494 "adrfam": "ipv4", 00:32:39.494 "trsvcid": "4420", 00:32:39.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:39.494 "hdgst": false, 00:32:39.494 "ddgst": false 00:32:39.494 }, 00:32:39.494 "method": "bdev_nvme_attach_controller" 00:32:39.494 },{ 00:32:39.494 "params": { 00:32:39.494 "name": "Nvme1", 00:32:39.494 "trtype": "tcp", 00:32:39.494 "traddr": "10.0.0.2", 00:32:39.494 "adrfam": "ipv4", 00:32:39.494 "trsvcid": "4420", 00:32:39.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.494 "hdgst": false, 00:32:39.494 "ddgst": false 00:32:39.494 }, 00:32:39.494 "method": "bdev_nvme_attach_controller" 00:32:39.494 }' 00:32:39.494 05:47:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:39.494 05:47:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:39.494 05:47:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.494 05:47:42 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.494 05:47:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:32:39.494 05:47:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:39.494 05:47:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:39.494 05:47:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:39.494 05:47:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:39.494 05:47:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.771 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:39.771 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:39.771 fio-3.35 00:32:39.771 Starting 2 threads 00:32:39.771 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.344 [2024-12-07 05:47:43.425378] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:40.344 [2024-12-07 05:47:43.425421] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:50.350 00:32:50.350 filename0: (groupid=0, jobs=1): err= 0: pid=2053741: Sat Dec 7 05:47:53 2024 00:32:50.350 read: IOPS=190, BW=762KiB/s (781kB/s)(7632KiB/10012msec) 00:32:50.350 slat (nsec): min=5417, max=32249, avg=6682.16, stdev=1885.92 00:32:50.350 clat (usec): min=504, max=42957, avg=20970.74, stdev=20124.57 00:32:50.350 lat (usec): min=512, max=42963, avg=20977.43, stdev=20124.29 00:32:50.350 clat percentiles (usec): 00:32:50.350 | 1.00th=[ 644], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 889], 00:32:50.350 | 30.00th=[ 906], 40.00th=[ 930], 50.00th=[ 2671], 60.00th=[41157], 00:32:50.350 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:50.350 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:50.350 | 99.99th=[42730] 00:32:50.350 bw ( KiB/s): min= 704, max= 768, per=49.97%, avg=761.26, stdev=17.13, samples=19 00:32:50.350 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:32:50.350 lat (usec) : 750=2.94%, 1000=45.18% 00:32:50.350 lat (msec) : 2=1.78%, 4=0.21%, 50=49.90% 00:32:50.350 cpu : usr=97.85%, sys=1.92%, ctx=14, majf=0, minf=162 00:32:50.350 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.350 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.350 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:50.350 filename1: (groupid=0, jobs=1): err= 0: pid=2053742: Sat Dec 7 05:47:53 2024 00:32:50.350 read: IOPS=190, BW=761KiB/s (780kB/s)(7616KiB/10003msec) 00:32:50.350 slat (nsec): min=2820, max=34967, avg=5821.30, stdev=1087.21 00:32:50.350 clat (usec): min=492, max=46321, avg=20997.03, stdev=20098.04 00:32:50.350 lat (usec): min=498, max=46339, avg=21002.85, stdev=20097.79 00:32:50.350 clat percentiles (usec): 00:32:50.350 | 1.00th=[ 701], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 914], 00:32:50.350 | 30.00th=[ 930], 40.00th=[ 938], 50.00th=[ 1057], 60.00th=[41157], 00:32:50.350 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:50.350 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:32:50.350 | 99.99th=[46400] 00:32:50.350 bw ( KiB/s): min= 672, max= 832, per=50.03%, avg=762.95, stdev=30.66, samples=19 00:32:50.350 iops : min= 168, max= 208, avg=190.74, stdev= 7.67, samples=19 00:32:50.350 lat (usec) : 500=0.11%, 750=2.84%, 1000=46.85% 00:32:50.350 lat (msec) : 2=0.21%, 50=50.00% 00:32:50.350 cpu : usr=97.65%, sys=2.13%, ctx=14, majf=0, minf=192 00:32:50.350 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.350 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.350 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:50.350 00:32:50.350 Run status group 0 (all jobs): 00:32:50.350 READ: bw=1523KiB/s (1560kB/s), 761KiB/s-762KiB/s (780kB/s-781kB/s), io=14.9MiB (15.6MB), run=10003-10012msec 00:32:50.611 05:47:53 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:50.612 05:47:53 -- target/dif.sh@43 -- # local sub 00:32:50.612 05:47:53 -- target/dif.sh@45 -- # for sub in 
"$@" 00:32:50.612 05:47:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:50.612 05:47:53 -- target/dif.sh@36 -- # local sub_id=0 00:32:50.612 05:47:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.612 05:47:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:50.612 05:47:53 -- target/dif.sh@36 -- # local sub_id=1 00:32:50.612 05:47:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 00:32:50.612 real 0m11.336s 00:32:50.612 user 0m34.138s 00:32:50.612 sys 0m0.796s 00:32:50.612 05:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 ************************************ 00:32:50.612 END TEST fio_dif_1_multi_subsystems 00:32:50.612 ************************************ 00:32:50.612 05:47:53 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:50.612 05:47:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:50.612 05:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 ************************************ 00:32:50.612 START TEST fio_dif_rand_params 00:32:50.612 ************************************ 00:32:50.612 05:47:53 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:32:50.612 05:47:53 -- target/dif.sh@100 -- # local NULL_DIF 00:32:50.612 05:47:53 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:50.612 05:47:53 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:50.612 05:47:53 -- target/dif.sh@103 -- # bs=128k 00:32:50.612 05:47:53 -- target/dif.sh@103 -- # numjobs=3 00:32:50.612 05:47:53 -- target/dif.sh@103 -- # iodepth=3 00:32:50.612 05:47:53 -- target/dif.sh@103 -- # runtime=5 00:32:50.612 05:47:53 -- target/dif.sh@105 -- # create_subsystems 0 00:32:50.612 05:47:53 -- target/dif.sh@28 -- # local sub 00:32:50.612 05:47:53 -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.612 05:47:53 -- target/dif.sh@31 -- # create_subsystem 0 00:32:50.612 05:47:53 -- target/dif.sh@18 -- # local sub_id=0 00:32:50.612 05:47:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 bdev_null0 00:32:50.612 05:47:53 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:50.612 05:47:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.612 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:50.612 [2024-12-07 05:47:53.818400] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.612 05:47:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.612 05:47:53 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:50.612 05:47:53 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:50.612 05:47:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:50.612 05:47:53 -- nvmf/common.sh@520 -- # config=() 00:32:50.612 05:47:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.612 05:47:53 -- nvmf/common.sh@520 -- # local subsystem config 00:32:50.612 05:47:53 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.612 05:47:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:50.612 05:47:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:50.612 { 00:32:50.612 "params": { 00:32:50.612 "name": "Nvme$subsystem", 00:32:50.612 "trtype": "$TEST_TRANSPORT", 00:32:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:50.612 "adrfam": "ipv4", 00:32:50.612 "trsvcid": "$NVMF_PORT", 00:32:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:50.612 "hdgst": ${hdgst:-false}, 00:32:50.612 "ddgst": ${ddgst:-false} 00:32:50.612 }, 00:32:50.612 "method": "bdev_nvme_attach_controller" 00:32:50.612 } 00:32:50.612 EOF 00:32:50.612 )") 00:32:50.612 05:47:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:50.612 05:47:53 -- target/dif.sh@82 -- # gen_fio_conf 00:32:50.612 05:47:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:50.612 05:47:53 -- target/dif.sh@54 -- # local file 00:32:50.612 05:47:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:50.612 05:47:53 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.612 05:47:53 -- target/dif.sh@56 -- # cat 00:32:50.612 05:47:53 -- common/autotest_common.sh@1330 -- # shift 00:32:50.612 05:47:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:50.612 05:47:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.612 05:47:53 -- nvmf/common.sh@542 -- # cat 00:32:50.612 05:47:53 -- common/autotest_common.sh@1334 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.612 05:47:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:50.612 05:47:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:50.612 05:47:53 -- target/dif.sh@72 -- # (( file <= files )) 00:32:50.612 05:47:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:50.612 05:47:53 -- nvmf/common.sh@544 -- # jq . 00:32:50.612 05:47:53 -- nvmf/common.sh@545 -- # IFS=, 00:32:50.612 05:47:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:50.612 "params": { 00:32:50.613 "name": "Nvme0", 00:32:50.613 "trtype": "tcp", 00:32:50.613 "traddr": "10.0.0.2", 00:32:50.613 "adrfam": "ipv4", 00:32:50.613 "trsvcid": "4420", 00:32:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:50.613 "hdgst": false, 00:32:50.613 "ddgst": false 00:32:50.613 }, 00:32:50.613 "method": "bdev_nvme_attach_controller" 00:32:50.613 }' 00:32:50.875 05:47:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:50.875 05:47:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:50.875 05:47:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.875 05:47:53 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.875 05:47:53 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:32:50.875 05:47:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:50.875 05:47:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:50.875 05:47:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:50.875 05:47:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:50.875 05:47:53 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:51.136 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:51.136 ... 00:32:51.136 fio-3.35 00:32:51.136 Starting 3 threads 00:32:51.136 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.705 [2024-12-07 05:47:54.738044] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
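The autotest_common.sh@1326-1345 trace just above shows what fio_bdev/fio_plugin actually execute: ldd probes the spdk_bdev fio plugin for sanitizer runtimes (libasan / libclang_rt.asan, third ldd column), whatever is found is prepended to LD_PRELOAD together with the plugin itself, and /usr/src/fio/fio is started with --ioengine=spdk_bdev plus the generated JSON and job files. A hedged sketch of that launch, using the workspace path from this log; wiring gen_nvmf_target_json and gen_fio_conf in through process substitution is an assumption about where the /dev/fd/62 and /dev/fd/61 arguments come from, so this shows the shape of the call rather than a standalone command.

# Hedged sketch of the fio launch traced above (paths as in this workspace).
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Probe the plugin for an ASan runtime; column 3 of ldd output is the
# resolved library path. It comes back empty in this log, so only the plugin
# itself lands in LD_PRELOAD (hence the leading space in the traced value).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" / "Unable to start RPC service" pair that appears right after each launch presumably comes from the fio process trying to bring up its own RPC server while the nvmf target application already owns that socket; the runs continue and report results regardless, as the summaries below show.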
00:32:51.705 [2024-12-07 05:47:54.738094] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:57.032 00:32:57.032 filename0: (groupid=0, jobs=1): err= 0: pid=2055972: Sat Dec 7 05:47:59 2024 00:32:57.032 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(156MiB/5047msec) 00:32:57.032 slat (nsec): min=5387, max=34701, avg=7252.85, stdev=1803.90 00:32:57.032 clat (usec): min=6530, max=88464, avg=12098.21, stdev=6009.51 00:32:57.032 lat (usec): min=6536, max=88472, avg=12105.46, stdev=6009.78 00:32:57.032 clat percentiles (usec): 00:32:57.032 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:32:57.032 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:32:57.032 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13566], 95.00th=[14353], 00:32:57.032 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52691], 99.95th=[88605], 00:32:57.032 | 99.99th=[88605] 00:32:57.032 bw ( KiB/s): min=24576, max=38912, per=34.16%, avg=31872.00, stdev=3628.92, samples=10 00:32:57.032 iops : min= 192, max= 304, avg=249.00, stdev=28.35, samples=10 00:32:57.032 lat (msec) : 10=22.37%, 20=75.62%, 50=0.88%, 100=1.12% 00:32:57.032 cpu : usr=94.29%, sys=5.45%, ctx=10, majf=0, minf=57 00:32:57.032 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 issued rwts: total=1247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.033 filename0: (groupid=0, jobs=1): err= 0: pid=2055973: Sat Dec 7 05:47:59 2024 00:32:57.033 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(148MiB/5046msec) 00:32:57.033 slat (nsec): min=5380, max=63466, avg=7439.02, stdev=2449.71 00:32:57.033 clat (usec): min=5949, max=90939, avg=12751.01, stdev=8124.54 00:32:57.033 lat (usec): min=5957, max=90945, avg=12758.45, stdev=8124.60 00:32:57.033 clat percentiles (usec): 00:32:57.033 | 1.00th=[ 7046], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9896], 00:32:57.033 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:32:57.033 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13698], 95.00th=[15139], 00:32:57.033 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53740], 99.95th=[90702], 00:32:57.033 | 99.99th=[90702] 00:32:57.033 bw ( KiB/s): min=19968, max=35328, per=32.38%, avg=30208.00, stdev=5686.04, samples=10 00:32:57.033 iops : min= 156, max= 276, avg=236.00, stdev=44.42, samples=10 00:32:57.033 lat (msec) : 10=21.39%, 20=74.73%, 50=0.93%, 100=2.96% 00:32:57.033 cpu : usr=94.67%, sys=5.07%, ctx=12, majf=0, minf=179 00:32:57.033 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.033 filename0: (groupid=0, jobs=1): err= 0: pid=2055974: Sat Dec 7 05:47:59 2024 00:32:57.033 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(156MiB/5046msec) 00:32:57.033 slat (nsec): min=5373, max=34962, avg=6599.90, stdev=1724.90 00:32:57.033 clat (usec): min=5552, max=88571, avg=12076.72, stdev=5549.24 00:32:57.033 lat (usec): min=5558, max=88580, avg=12083.32, stdev=5549.52 00:32:57.033 clat percentiles (usec): 
00:32:57.033 | 1.00th=[ 6980], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[ 9765], 00:32:57.033 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:32:57.033 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[14615], 00:32:57.033 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55313], 99.95th=[88605], 00:32:57.033 | 99.99th=[88605] 00:32:57.033 bw ( KiB/s): min=27648, max=36608, per=34.21%, avg=31923.20, stdev=3057.86, samples=10 00:32:57.033 iops : min= 216, max= 286, avg=249.40, stdev=23.89, samples=10 00:32:57.033 lat (msec) : 10=24.02%, 20=74.46%, 50=0.24%, 100=1.28% 00:32:57.033 cpu : usr=95.26%, sys=4.50%, ctx=14, majf=0, minf=130 00:32:57.033 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:57.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.033 issued rwts: total=1249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:57.033 00:32:57.033 Run status group 0 (all jobs): 00:32:57.033 READ: bw=91.1MiB/s (95.5MB/s), 29.3MiB/s-30.9MiB/s (30.7MB/s-32.4MB/s), io=460MiB (482MB), run=5046-5047msec 00:32:57.033 05:48:00 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:57.033 05:48:00 -- target/dif.sh@43 -- # local sub 00:32:57.033 05:48:00 -- target/dif.sh@45 -- # for sub in "$@" 00:32:57.033 05:48:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:57.033 05:48:00 -- target/dif.sh@36 -- # local sub_id=0 00:32:57.033 05:48:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # bs=4k 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # numjobs=8 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # iodepth=16 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # runtime= 00:32:57.033 05:48:00 -- target/dif.sh@109 -- # files=2 00:32:57.033 05:48:00 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:57.033 05:48:00 -- target/dif.sh@28 -- # local sub 00:32:57.033 05:48:00 -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.033 05:48:00 -- target/dif.sh@31 -- # create_subsystem 0 00:32:57.033 05:48:00 -- target/dif.sh@18 -- # local sub_id=0 00:32:57.033 05:48:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 bdev_null0 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 [2024-12-07 05:48:00.113480] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.033 05:48:00 -- target/dif.sh@31 -- # create_subsystem 1 00:32:57.033 05:48:00 -- target/dif.sh@18 -- # local sub_id=1 00:32:57.033 05:48:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 bdev_null1 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.033 05:48:00 -- target/dif.sh@31 -- # create_subsystem 2 00:32:57.033 05:48:00 -- target/dif.sh@18 -- # local sub_id=2 00:32:57.033 05:48:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 bdev_null2 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- 
common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:57.033 05:48:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.033 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:32:57.033 05:48:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.033 05:48:00 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:57.033 05:48:00 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:57.033 05:48:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:57.033 05:48:00 -- nvmf/common.sh@520 -- # config=() 00:32:57.033 05:48:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.033 05:48:00 -- nvmf/common.sh@520 -- # local subsystem config 00:32:57.033 05:48:00 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.033 05:48:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:57.033 05:48:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:57.033 { 00:32:57.033 "params": { 00:32:57.033 "name": "Nvme$subsystem", 00:32:57.033 "trtype": "$TEST_TRANSPORT", 00:32:57.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.033 "adrfam": "ipv4", 00:32:57.033 "trsvcid": "$NVMF_PORT", 00:32:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.033 "hdgst": ${hdgst:-false}, 00:32:57.033 "ddgst": ${ddgst:-false} 00:32:57.033 }, 00:32:57.033 "method": "bdev_nvme_attach_controller" 00:32:57.033 } 00:32:57.033 EOF 00:32:57.033 )") 00:32:57.033 05:48:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:32:57.034 05:48:00 -- target/dif.sh@82 -- # gen_fio_conf 00:32:57.034 05:48:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:57.034 05:48:00 -- target/dif.sh@54 -- # local file 00:32:57.034 05:48:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:32:57.034 05:48:00 -- target/dif.sh@56 -- # cat 00:32:57.034 05:48:00 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.034 05:48:00 -- common/autotest_common.sh@1330 -- # shift 00:32:57.034 05:48:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:32:57.034 05:48:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.034 05:48:00 -- nvmf/common.sh@542 -- # cat 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:57.034 05:48:00 -- target/dif.sh@73 -- # cat 00:32:57.034 05:48:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:57.034 05:48:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:57.034 { 00:32:57.034 "params": { 00:32:57.034 "name": "Nvme$subsystem", 00:32:57.034 "trtype": "$TEST_TRANSPORT", 00:32:57.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.034 "adrfam": "ipv4", 00:32:57.034 "trsvcid": 
"$NVMF_PORT", 00:32:57.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.034 "hdgst": ${hdgst:-false}, 00:32:57.034 "ddgst": ${ddgst:-false} 00:32:57.034 }, 00:32:57.034 "method": "bdev_nvme_attach_controller" 00:32:57.034 } 00:32:57.034 EOF 00:32:57.034 )") 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file++ )) 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.034 05:48:00 -- target/dif.sh@73 -- # cat 00:32:57.034 05:48:00 -- nvmf/common.sh@542 -- # cat 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file++ )) 00:32:57.034 05:48:00 -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.034 05:48:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:57.034 05:48:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:57.034 { 00:32:57.034 "params": { 00:32:57.034 "name": "Nvme$subsystem", 00:32:57.034 "trtype": "$TEST_TRANSPORT", 00:32:57.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.034 "adrfam": "ipv4", 00:32:57.034 "trsvcid": "$NVMF_PORT", 00:32:57.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.034 "hdgst": ${hdgst:-false}, 00:32:57.034 "ddgst": ${ddgst:-false} 00:32:57.034 }, 00:32:57.034 "method": "bdev_nvme_attach_controller" 00:32:57.034 } 00:32:57.034 EOF 00:32:57.034 )") 00:32:57.034 05:48:00 -- nvmf/common.sh@542 -- # cat 00:32:57.034 05:48:00 -- nvmf/common.sh@544 -- # jq . 00:32:57.034 05:48:00 -- nvmf/common.sh@545 -- # IFS=, 00:32:57.034 05:48:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:57.034 "params": { 00:32:57.034 "name": "Nvme0", 00:32:57.034 "trtype": "tcp", 00:32:57.034 "traddr": "10.0.0.2", 00:32:57.034 "adrfam": "ipv4", 00:32:57.034 "trsvcid": "4420", 00:32:57.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.034 "hdgst": false, 00:32:57.034 "ddgst": false 00:32:57.034 }, 00:32:57.034 "method": "bdev_nvme_attach_controller" 00:32:57.034 },{ 00:32:57.034 "params": { 00:32:57.034 "name": "Nvme1", 00:32:57.034 "trtype": "tcp", 00:32:57.034 "traddr": "10.0.0.2", 00:32:57.034 "adrfam": "ipv4", 00:32:57.034 "trsvcid": "4420", 00:32:57.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.034 "hdgst": false, 00:32:57.034 "ddgst": false 00:32:57.034 }, 00:32:57.034 "method": "bdev_nvme_attach_controller" 00:32:57.034 },{ 00:32:57.034 "params": { 00:32:57.034 "name": "Nvme2", 00:32:57.034 "trtype": "tcp", 00:32:57.034 "traddr": "10.0.0.2", 00:32:57.034 "adrfam": "ipv4", 00:32:57.034 "trsvcid": "4420", 00:32:57.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:57.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:57.034 "hdgst": false, 00:32:57.034 "ddgst": false 00:32:57.034 }, 00:32:57.034 "method": "bdev_nvme_attach_controller" 00:32:57.034 }' 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:57.034 05:48:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:57.034 05:48:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:32:57.034 05:48:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:32:57.313 05:48:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:32:57.313 05:48:00 -- 
common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:32:57.313 05:48:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:57.313 05:48:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.577 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.577 ... 00:32:57.577 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.577 ... 00:32:57.577 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:57.577 ... 00:32:57.577 fio-3.35 00:32:57.577 Starting 24 threads 00:32:57.577 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.518 [2024-12-07 05:48:01.471705] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:58.518 [2024-12-07 05:48:01.471752] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:08.517 00:33:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=2057578: Sat Dec 7 05:48:11 2024 00:33:08.517 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:33:08.517 slat (nsec): min=5448, max=79229, avg=13451.70, stdev=10799.12 00:33:08.517 clat (usec): min=7094, max=36688, avg=30410.42, stdev=2438.79 00:33:08.517 lat (usec): min=7110, max=36697, avg=30423.88, stdev=2438.40 00:33:08.517 clat percentiles (usec): 00:33:08.517 | 1.00th=[13698], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.517 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.517 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.517 | 99.00th=[32375], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:33:08.517 | 99.99th=[36439] 00:33:08.517 bw ( KiB/s): min= 2043, max= 2304, per=4.18%, avg=2094.89, stdev=75.32, samples=19 00:33:08.517 iops : min= 510, max= 576, avg=523.68, stdev=18.86, samples=19 00:33:08.517 lat (msec) : 10=0.61%, 20=0.61%, 50=98.78% 00:33:08.517 cpu : usr=99.12%, sys=0.57%, ctx=13, majf=0, minf=65 00:33:08.517 IO depths : 1=5.6%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=2057579: Sat Dec 7 05:48:11 2024 00:33:08.517 read: IOPS=531, BW=2125KiB/s (2176kB/s)(20.8MiB/10006msec) 00:33:08.517 slat (usec): min=5, max=110, avg=16.69, stdev=15.07 00:33:08.517 clat (usec): min=6412, max=36430, avg=29992.00, stdev=3118.75 00:33:08.517 lat (usec): min=6428, max=36437, avg=30008.69, stdev=3120.56 00:33:08.517 clat percentiles (usec): 00:33:08.517 | 1.00th=[15139], 5.00th=[26608], 10.00th=[29754], 20.00th=[30278], 00:33:08.517 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:33:08.517 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.517 | 99.00th=[32637], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:33:08.517 | 99.99th=[36439] 00:33:08.517 bw ( KiB/s): min= 2043, max= 2584, per=4.25%, avg=2129.58, stdev=133.79, samples=19 00:33:08.517 
iops : min= 510, max= 646, avg=532.32, stdev=33.42, samples=19 00:33:08.517 lat (msec) : 10=0.49%, 20=3.24%, 50=96.27% 00:33:08.517 cpu : usr=98.87%, sys=0.78%, ctx=53, majf=0, minf=60 00:33:08.517 IO depths : 1=5.4%, 2=11.3%, 4=23.8%, 8=52.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 issued rwts: total=5315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=2057580: Sat Dec 7 05:48:11 2024 00:33:08.517 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.3MiB/10010msec) 00:33:08.517 slat (usec): min=5, max=100, avg=31.85, stdev=16.91 00:33:08.517 clat (usec): min=20048, max=53606, avg=30518.73, stdev=1364.54 00:33:08.517 lat (usec): min=20060, max=53627, avg=30550.58, stdev=1364.62 00:33:08.517 clat percentiles (usec): 00:33:08.517 | 1.00th=[26346], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.517 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.517 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31065], 95.00th=[31327], 00:33:08.517 | 99.00th=[33817], 99.50th=[37487], 99.90th=[45876], 99.95th=[45876], 00:33:08.517 | 99.99th=[53740] 00:33:08.517 bw ( KiB/s): min= 1888, max= 2176, per=4.14%, avg=2073.47, stdev=72.73, samples=19 00:33:08.517 iops : min= 472, max= 544, avg=518.37, stdev=18.18, samples=19 00:33:08.517 lat (msec) : 50=99.96%, 100=0.04% 00:33:08.517 cpu : usr=98.93%, sys=0.74%, ctx=58, majf=0, minf=43 00:33:08.517 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 issued rwts: total=5196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=2057581: Sat Dec 7 05:48:11 2024 00:33:08.517 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.4MiB/10016msec) 00:33:08.517 slat (usec): min=5, max=106, avg=18.05, stdev=17.05 00:33:08.517 clat (usec): min=10904, max=54315, avg=30570.28, stdev=1905.72 00:33:08.517 lat (usec): min=10909, max=54321, avg=30588.33, stdev=1906.39 00:33:08.517 clat percentiles (usec): 00:33:08.517 | 1.00th=[25297], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.517 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.517 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.517 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070], 00:33:08.517 | 99.99th=[54264] 00:33:08.517 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2079.75, stdev=56.43, samples=20 00:33:08.517 iops : min= 512, max= 544, avg=519.90, stdev=14.04, samples=20 00:33:08.517 lat (msec) : 20=0.54%, 50=99.31%, 100=0.15% 00:33:08.517 cpu : usr=98.59%, sys=0.86%, ctx=154, majf=0, minf=47 00:33:08.517 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.517 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.517 
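Stepping back from the per-job numbers for a moment: the destroy_subsystems / create_subsystems xtrace a little earlier in this run reduces to a short RPC sequence per null bdev, issued through rpc_cmd (the autotest wrapper around SPDK's scripts/rpc.py). The commands below are copied from that trace for subsystem 0 of the --dif-type 2 setup; subsystems 1 and 2 repeat the same four calls with bdev_null1/bdev_null2 and cnode1/cnode2.

# Per-subsystem setup as traced (target/dif.sh@21-24):
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Matching teardown as traced (target/dif.sh@38-39):
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_null_delete bdev_null0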
filename0: (groupid=0, jobs=1): err= 0: pid=2057582: Sat Dec 7 05:48:11 2024 00:33:08.517 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10005msec) 00:33:08.517 slat (usec): min=5, max=115, avg=32.30, stdev=19.99 00:33:08.517 clat (usec): min=10042, max=52777, avg=30528.20, stdev=2508.45 00:33:08.517 lat (usec): min=10048, max=52786, avg=30560.50, stdev=2509.24 00:33:08.517 clat percentiles (usec): 00:33:08.517 | 1.00th=[19006], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:08.517 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.517 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.517 | 99.00th=[43254], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:33:08.517 | 99.99th=[52691] 00:33:08.517 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2070.47, stdev=66.04, samples=19 00:33:08.517 iops : min= 480, max= 544, avg=517.58, stdev=16.45, samples=19 00:33:08.517 lat (msec) : 20=1.00%, 50=98.88%, 100=0.12% 00:33:08.518 cpu : usr=98.71%, sys=0.79%, ctx=55, majf=0, minf=39 00:33:08.518 IO depths : 1=5.8%, 2=11.7%, 4=24.2%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename0: (groupid=0, jobs=1): err= 0: pid=2057583: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.3MiB/10004msec) 00:33:08.518 slat (nsec): min=5529, max=96129, avg=28307.06, stdev=16534.23 00:33:08.518 clat (usec): min=9162, max=45223, avg=30491.85, stdev=2371.17 00:33:08.518 lat (usec): min=9168, max=45250, avg=30520.15, stdev=2370.79 00:33:08.518 clat percentiles (usec): 00:33:08.518 | 1.00th=[21890], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.518 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.518 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.518 | 99.00th=[38011], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:33:08.518 | 99.99th=[45351] 00:33:08.518 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2070.42, stdev=75.51, samples=19 00:33:08.518 iops : min= 480, max= 544, avg=517.53, stdev=18.84, samples=19 00:33:08.518 lat (msec) : 10=0.31%, 20=0.65%, 50=99.04% 00:33:08.518 cpu : usr=98.92%, sys=0.77%, ctx=15, majf=0, minf=49 00:33:08.518 IO depths : 1=4.5%, 2=10.5%, 4=24.2%, 8=52.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename0: (groupid=0, jobs=1): err= 0: pid=2057584: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:33:08.518 slat (usec): min=5, max=103, avg=30.17, stdev=15.14 00:33:08.518 clat (usec): min=12458, max=34054, avg=30421.53, stdev=1393.40 00:33:08.518 lat (usec): min=12470, max=34110, avg=30451.69, stdev=1394.49 00:33:08.518 clat percentiles (usec): 00:33:08.518 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.518 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.518 | 
70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:33:08.518 | 99.00th=[31851], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:33:08.518 | 99.99th=[33817] 00:33:08.518 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2081.37, stdev=57.52, samples=19 00:33:08.518 iops : min= 510, max= 544, avg=520.26, stdev=14.34, samples=19 00:33:08.518 lat (msec) : 20=0.61%, 50=99.39% 00:33:08.518 cpu : usr=98.52%, sys=0.95%, ctx=110, majf=0, minf=41 00:33:08.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename0: (groupid=0, jobs=1): err= 0: pid=2057585: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=521, BW=2084KiB/s (2134kB/s)(20.4MiB/10007msec) 00:33:08.518 slat (nsec): min=5464, max=91835, avg=17289.61, stdev=12912.62 00:33:08.518 clat (usec): min=9082, max=48145, avg=30575.68, stdev=2099.94 00:33:08.518 lat (usec): min=9088, max=48165, avg=30592.97, stdev=2099.52 00:33:08.518 clat percentiles (usec): 00:33:08.518 | 1.00th=[25297], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.518 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.518 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.518 | 99.00th=[32375], 99.50th=[43254], 99.90th=[47973], 99.95th=[47973], 00:33:08.518 | 99.99th=[47973] 00:33:08.518 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2074.42, stdev=62.63, samples=19 00:33:08.518 iops : min= 480, max= 544, avg=518.53, stdev=15.69, samples=19 00:33:08.518 lat (msec) : 10=0.27%, 20=0.58%, 50=99.16% 00:33:08.518 cpu : usr=98.25%, sys=1.04%, ctx=196, majf=0, minf=42 00:33:08.518 IO depths : 1=1.0%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename1: (groupid=0, jobs=1): err= 0: pid=2057586: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.3MiB/10013msec) 00:33:08.518 slat (usec): min=5, max=109, avg=29.86, stdev=17.63 00:33:08.518 clat (usec): min=19355, max=37188, avg=30521.90, stdev=919.61 00:33:08.518 lat (usec): min=19364, max=37222, avg=30551.76, stdev=920.40 00:33:08.518 clat percentiles (usec): 00:33:08.518 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.518 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.518 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:33:08.518 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:33:08.518 | 99.99th=[36963] 00:33:08.518 bw ( KiB/s): min= 2043, max= 2176, per=4.14%, avg=2074.68, stdev=53.76, samples=19 00:33:08.518 iops : min= 510, max= 544, avg=518.63, stdev=13.47, samples=19 00:33:08.518 lat (msec) : 20=0.04%, 50=99.96% 00:33:08.518 cpu : usr=98.65%, sys=0.84%, ctx=170, majf=0, minf=40 00:33:08.518 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename1: (groupid=0, jobs=1): err= 0: pid=2057587: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=524, BW=2100KiB/s (2150kB/s)(20.6MiB/10048msec) 00:33:08.518 slat (nsec): min=5523, max=96572, avg=16159.26, stdev=13784.30 00:33:08.518 clat (usec): min=8604, max=52777, avg=30358.66, stdev=4460.90 00:33:08.518 lat (usec): min=8609, max=52783, avg=30374.82, stdev=4461.35 00:33:08.518 clat percentiles (usec): 00:33:08.518 | 1.00th=[17695], 5.00th=[22414], 10.00th=[25035], 20.00th=[29754], 00:33:08.518 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.518 | 70.00th=[31065], 80.00th=[31327], 90.00th=[33424], 95.00th=[36963], 00:33:08.518 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:33:08.518 | 99.99th=[52691] 00:33:08.518 bw ( KiB/s): min= 1936, max= 2231, per=4.20%, avg=2104.65, stdev=72.35, samples=20 00:33:08.518 iops : min= 484, max= 557, avg=526.05, stdev=18.11, samples=20 00:33:08.518 lat (msec) : 10=0.11%, 20=1.92%, 50=97.71%, 100=0.27% 00:33:08.518 cpu : usr=99.05%, sys=0.63%, ctx=15, majf=0, minf=38 00:33:08.518 IO depths : 1=0.1%, 2=0.7%, 4=4.3%, 8=78.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:33:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.518 issued rwts: total=5274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.518 filename1: (groupid=0, jobs=1): err= 0: pid=2057588: Sat Dec 7 05:48:11 2024 00:33:08.518 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:33:08.518 slat (usec): min=5, max=101, avg=29.02, stdev=15.71 00:33:08.518 clat (usec): min=11465, max=35339, avg=30423.39, stdev=1601.01 00:33:08.519 lat (usec): min=11475, max=35360, avg=30452.41, stdev=1601.80 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[23200], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.519 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.519 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:33:08.519 | 99.00th=[31851], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:33:08.519 | 99.99th=[35390] 00:33:08.519 bw ( KiB/s): min= 2043, max= 2180, per=4.16%, avg=2081.58, stdev=57.89, samples=19 00:33:08.519 iops : min= 510, max= 545, avg=520.32, stdev=14.44, samples=19 00:33:08.519 lat (msec) : 20=0.61%, 50=99.39% 00:33:08.519 cpu : usr=98.62%, sys=0.97%, ctx=116, majf=0, minf=56 00:33:08.519 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:08.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.519 filename1: (groupid=0, jobs=1): err= 0: pid=2057589: Sat Dec 7 05:48:11 2024 00:33:08.519 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.3MiB/10007msec) 00:33:08.519 slat (usec): min=5, max=109, avg=21.11, stdev=15.30 00:33:08.519 clat (usec): min=8361, max=51586, avg=30561.37, stdev=1715.40 00:33:08.519 lat (usec): 
min=8368, max=51592, avg=30582.49, stdev=1716.53 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[24773], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:33:08.519 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.519 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.519 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39060], 99.95th=[51643], 00:33:08.519 | 99.99th=[51643] 00:33:08.519 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2077.21, stdev=65.04, samples=19 00:33:08.519 iops : min= 480, max= 544, avg=519.26, stdev=16.21, samples=19 00:33:08.519 lat (msec) : 10=0.08%, 20=0.54%, 50=99.31%, 100=0.08% 00:33:08.519 cpu : usr=98.82%, sys=0.88%, ctx=13, majf=0, minf=47 00:33:08.519 IO depths : 1=4.9%, 2=11.0%, 4=24.5%, 8=51.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:08.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.519 filename1: (groupid=0, jobs=1): err= 0: pid=2057590: Sat Dec 7 05:48:11 2024 00:33:08.519 read: IOPS=520, BW=2081KiB/s (2130kB/s)(20.3MiB/10011msec) 00:33:08.519 slat (nsec): min=5530, max=84931, avg=24134.30, stdev=13778.68 00:33:08.519 clat (usec): min=12403, max=43316, avg=30542.09, stdev=1910.12 00:33:08.519 lat (usec): min=12409, max=43333, avg=30566.23, stdev=1911.29 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[21103], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.519 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:33:08.519 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.519 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:33:08.519 | 99.99th=[43254] 00:33:08.519 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2074.68, stdev=67.17, samples=19 00:33:08.519 iops : min= 480, max= 544, avg=518.63, stdev=16.81, samples=19 00:33:08.519 lat (msec) : 20=0.71%, 50=99.29% 00:33:08.519 cpu : usr=98.66%, sys=0.89%, ctx=40, majf=0, minf=40 00:33:08.519 IO depths : 1=5.1%, 2=11.3%, 4=24.8%, 8=51.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:33:08.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 issued rwts: total=5207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.519 filename1: (groupid=0, jobs=1): err= 0: pid=2057591: Sat Dec 7 05:48:11 2024 00:33:08.519 read: IOPS=521, BW=2088KiB/s (2138kB/s)(20.4MiB/10024msec) 00:33:08.519 slat (usec): min=5, max=103, avg=17.02, stdev=12.93 00:33:08.519 clat (usec): min=9754, max=42294, avg=30513.02, stdev=1790.32 00:33:08.519 lat (usec): min=9760, max=42311, avg=30530.05, stdev=1791.21 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[20055], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.519 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.519 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.519 | 99.00th=[32113], 99.50th=[37487], 99.90th=[41681], 99.95th=[41681], 00:33:08.519 | 99.99th=[42206] 00:33:08.519 bw ( KiB/s): min= 1920, max= 2352, per=4.17%, avg=2088.55, stdev=91.15, samples=20 00:33:08.519 iops : min= 480, max= 588, 
avg=522.10, stdev=22.81, samples=20 00:33:08.519 lat (msec) : 10=0.11%, 20=0.84%, 50=99.04% 00:33:08.519 cpu : usr=98.19%, sys=1.15%, ctx=109, majf=0, minf=41 00:33:08.519 IO depths : 1=5.5%, 2=11.7%, 4=24.7%, 8=51.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:08.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.519 filename1: (groupid=0, jobs=1): err= 0: pid=2057592: Sat Dec 7 05:48:11 2024 00:33:08.519 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.2MiB/10016msec) 00:33:08.519 slat (nsec): min=2805, max=87147, avg=7446.90, stdev=4616.47 00:33:08.519 clat (usec): min=590, max=56278, avg=26980.16, stdev=6343.51 00:33:08.519 lat (usec): min=597, max=56284, avg=26987.61, stdev=6344.25 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[ 1565], 5.00th=[17695], 10.00th=[19268], 20.00th=[21103], 00:33:08.519 | 30.00th=[24511], 40.00th=[30016], 50.00th=[30278], 60.00th=[30540], 00:33:08.519 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:08.519 | 99.00th=[33162], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:33:08.519 | 99.99th=[56361] 00:33:08.519 bw ( KiB/s): min= 2048, max= 4072, per=4.75%, avg=2380.53, stdev=535.10, samples=19 00:33:08.519 iops : min= 512, max= 1018, avg=595.05, stdev=133.81, samples=19 00:33:08.519 lat (usec) : 750=0.02% 00:33:08.519 lat (msec) : 2=1.28%, 4=0.67%, 10=1.08%, 20=10.07%, 50=86.70% 00:33:08.519 lat (msec) : 100=0.17% 00:33:08.519 cpu : usr=98.91%, sys=0.79%, ctx=15, majf=0, minf=88 00:33:08.519 IO depths : 1=3.9%, 2=8.0%, 4=18.2%, 8=61.0%, 16=8.8%, 32=0.0%, >=64=0.0% 00:33:08.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.519 issued rwts: total=5927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.519 filename1: (groupid=0, jobs=1): err= 0: pid=2057593: Sat Dec 7 05:48:11 2024 00:33:08.519 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.3MiB/10022msec) 00:33:08.519 slat (usec): min=5, max=106, avg=27.45, stdev=18.92 00:33:08.519 clat (usec): min=8299, max=56226, avg=30644.24, stdev=2367.10 00:33:08.519 lat (usec): min=8305, max=56257, avg=30671.69, stdev=2367.41 00:33:08.519 clat percentiles (usec): 00:33:08.519 | 1.00th=[26870], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.519 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:33:08.519 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.519 | 99.00th=[36439], 99.50th=[52691], 99.90th=[56361], 99.95th=[56361], 00:33:08.519 | 99.99th=[56361] 00:33:08.519 bw ( KiB/s): min= 1795, max= 2176, per=4.13%, avg=2069.95, stdev=86.04, samples=20 00:33:08.519 iops : min= 448, max= 544, avg=517.45, stdev=21.64, samples=20 00:33:08.519 lat (msec) : 10=0.04%, 20=0.69%, 50=98.65%, 100=0.62% 00:33:08.519 cpu : usr=98.98%, sys=0.70%, ctx=18, majf=0, minf=52 00:33:08.519 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057594: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10003msec) 00:33:08.520 slat (nsec): min=5632, max=96500, avg=26885.76, stdev=16863.11 00:33:08.520 clat (usec): min=9079, max=47754, avg=30366.08, stdev=2559.01 00:33:08.520 lat (usec): min=9085, max=47772, avg=30392.96, stdev=2559.07 00:33:08.520 clat percentiles (usec): 00:33:08.520 | 1.00th=[18482], 5.00th=[29492], 10.00th=[30016], 20.00th=[30016], 00:33:08.520 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.520 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.520 | 99.00th=[36439], 99.50th=[44303], 99.90th=[47449], 99.95th=[47973], 00:33:08.520 | 99.99th=[47973] 00:33:08.520 bw ( KiB/s): min= 1923, max= 2224, per=4.15%, avg=2079.84, stdev=72.45, samples=19 00:33:08.520 iops : min= 480, max= 556, avg=519.84, stdev=18.17, samples=19 00:33:08.520 lat (msec) : 10=0.31%, 20=1.22%, 50=98.47% 00:33:08.520 cpu : usr=98.63%, sys=0.80%, ctx=201, majf=0, minf=38 00:33:08.520 IO depths : 1=5.5%, 2=11.5%, 4=24.1%, 8=51.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057595: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=511, BW=2045KiB/s (2094kB/s)(20.0MiB/10004msec) 00:33:08.520 slat (usec): min=5, max=116, avg=20.14, stdev=18.08 00:33:08.520 clat (usec): min=4236, max=57978, avg=31189.88, stdev=4737.81 00:33:08.520 lat (usec): min=4241, max=57985, avg=31210.03, stdev=4737.62 00:33:08.520 clat percentiles (usec): 00:33:08.520 | 1.00th=[16188], 5.00th=[25297], 10.00th=[29492], 20.00th=[30278], 00:33:08.520 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.520 | 70.00th=[31065], 80.00th=[31589], 90.00th=[34866], 95.00th=[40633], 00:33:08.520 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53740], 99.95th=[57934], 00:33:08.520 | 99.99th=[57934] 00:33:08.520 bw ( KiB/s): min= 1920, max= 2176, per=4.06%, avg=2033.42, stdev=71.61, samples=19 00:33:08.520 iops : min= 480, max= 544, avg=508.32, stdev=17.87, samples=19 00:33:08.520 lat (msec) : 10=0.63%, 20=1.74%, 50=96.62%, 100=1.02% 00:33:08.520 cpu : usr=99.02%, sys=0.65%, ctx=37, majf=0, minf=88 00:33:08.520 IO depths : 1=0.9%, 2=1.9%, 4=6.2%, 8=75.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057596: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.3MiB/10012msec) 00:33:08.520 slat (nsec): min=5529, max=95573, avg=16172.61, stdev=14171.32 00:33:08.520 clat (usec): min=18961, max=42466, avg=30673.52, stdev=1014.72 00:33:08.520 lat (usec): min=18970, max=42488, avg=30689.69, stdev=1013.42 00:33:08.520 clat percentiles (usec): 00:33:08.520 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 
00:33:08.520 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.520 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.520 | 99.00th=[33817], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:33:08.520 | 99.99th=[42206] 00:33:08.520 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2074.84, stdev=68.26, samples=19 00:33:08.520 iops : min= 480, max= 544, avg=518.63, stdev=17.18, samples=19 00:33:08.520 lat (msec) : 20=0.08%, 50=99.92% 00:33:08.520 cpu : usr=98.98%, sys=0.71%, ctx=14, majf=0, minf=52 00:33:08.520 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057597: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.3MiB/10007msec) 00:33:08.520 slat (usec): min=5, max=115, avg=19.64, stdev=18.64 00:33:08.520 clat (usec): min=9356, max=56160, avg=30767.71, stdev=4053.63 00:33:08.520 lat (usec): min=9372, max=56166, avg=30787.36, stdev=4053.58 00:33:08.520 clat percentiles (usec): 00:33:08.520 | 1.00th=[17171], 5.00th=[24249], 10.00th=[28443], 20.00th=[30016], 00:33:08.520 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.520 | 70.00th=[31065], 80.00th=[31327], 90.00th=[32375], 95.00th=[36963], 00:33:08.520 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53740], 99.95th=[56361], 00:33:08.520 | 99.99th=[56361] 00:33:08.520 bw ( KiB/s): min= 1920, max= 2208, per=4.13%, avg=2068.53, stdev=67.35, samples=19 00:33:08.520 iops : min= 480, max= 552, avg=517.05, stdev=16.87, samples=19 00:33:08.520 lat (msec) : 10=0.19%, 20=1.27%, 50=98.15%, 100=0.39% 00:33:08.520 cpu : usr=99.07%, sys=0.62%, ctx=15, majf=0, minf=37 00:33:08.520 IO depths : 1=1.1%, 2=2.2%, 4=6.1%, 8=75.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=90.0%, 8=7.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057598: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10005msec) 00:33:08.520 slat (usec): min=5, max=108, avg=33.18, stdev=17.23 00:33:08.520 clat (usec): min=14514, max=44700, avg=30476.27, stdev=1317.09 00:33:08.520 lat (usec): min=14533, max=44718, avg=30509.45, stdev=1317.03 00:33:08.520 clat percentiles (usec): 00:33:08.520 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:08.520 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.520 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:33:08.520 | 99.00th=[31851], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:33:08.520 | 99.99th=[44827] 00:33:08.520 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2074.68, stdev=68.12, samples=19 00:33:08.520 iops : min= 480, max= 544, avg=518.63, stdev=16.97, samples=19 00:33:08.520 lat (msec) : 20=0.31%, 50=99.69% 00:33:08.520 cpu : usr=99.00%, sys=0.69%, ctx=36, majf=0, minf=43 00:33:08.520 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:08.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.520 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.520 filename2: (groupid=0, jobs=1): err= 0: pid=2057599: Sat Dec 7 05:48:11 2024 00:33:08.520 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.3MiB/10011msec) 00:33:08.520 slat (nsec): min=5168, max=91949, avg=22881.59, stdev=17136.38 00:33:08.520 clat (usec): min=19133, max=50050, avg=30623.01, stdev=1068.92 00:33:08.521 lat (usec): min=19142, max=50064, avg=30645.89, stdev=1067.09 00:33:08.521 clat percentiles (usec): 00:33:08.521 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.521 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.521 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.521 | 99.00th=[33424], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:33:08.521 | 99.99th=[50070] 00:33:08.521 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2074.84, stdev=68.26, samples=19 00:33:08.521 iops : min= 480, max= 544, avg=518.63, stdev=17.18, samples=19 00:33:08.521 lat (msec) : 20=0.12%, 50=99.87%, 100=0.02% 00:33:08.521 cpu : usr=98.28%, sys=1.00%, ctx=322, majf=0, minf=41 00:33:08.521 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:08.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.521 filename2: (groupid=0, jobs=1): err= 0: pid=2057600: Sat Dec 7 05:48:11 2024 00:33:08.521 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.3MiB/10012msec) 00:33:08.521 slat (usec): min=5, max=109, avg=18.03, stdev=17.06 00:33:08.521 clat (usec): min=21461, max=36633, avg=30658.30, stdev=1123.14 00:33:08.521 lat (usec): min=21470, max=36639, avg=30676.33, stdev=1123.40 00:33:08.521 clat percentiles (usec): 00:33:08.521 | 1.00th=[25560], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:08.521 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:33:08.521 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.521 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:33:08.521 | 99.99th=[36439] 00:33:08.521 bw ( KiB/s): min= 2043, max= 2176, per=4.14%, avg=2074.68, stdev=53.76, samples=19 00:33:08.521 iops : min= 510, max= 544, avg=518.63, stdev=13.47, samples=19 00:33:08.521 lat (msec) : 50=100.00% 00:33:08.521 cpu : usr=98.80%, sys=0.82%, ctx=68, majf=0, minf=37 00:33:08.521 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:08.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.521 filename2: (groupid=0, jobs=1): err= 0: pid=2057601: Sat Dec 7 05:48:11 2024 00:33:08.521 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10003msec) 00:33:08.521 slat (usec): min=5, max=112, avg=29.15, stdev=17.21 00:33:08.521 clat 
(usec): min=8127, max=49994, avg=30365.82, stdev=2459.93 00:33:08.521 lat (usec): min=8133, max=50023, avg=30394.96, stdev=2461.22 00:33:08.521 clat percentiles (usec): 00:33:08.521 | 1.00th=[20055], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:08.521 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:33:08.521 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:33:08.521 | 99.00th=[37487], 99.50th=[41157], 99.90th=[50070], 99.95th=[50070], 00:33:08.521 | 99.99th=[50070] 00:33:08.521 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2079.84, stdev=64.41, samples=19 00:33:08.521 iops : min= 480, max= 544, avg=519.84, stdev=16.18, samples=19 00:33:08.521 lat (msec) : 10=0.31%, 20=0.61%, 50=99.08% 00:33:08.521 cpu : usr=98.95%, sys=0.63%, ctx=143, majf=0, minf=40 00:33:08.521 IO depths : 1=2.6%, 2=8.2%, 4=22.9%, 8=56.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:33:08.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 complete : 0=0.0%, 4=93.8%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.521 issued rwts: total=5228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:08.521 00:33:08.521 Run status group 0 (all jobs): 00:33:08.521 READ: bw=48.9MiB/s (51.3MB/s), 2045KiB/s-2367KiB/s (2094kB/s-2424kB/s), io=491MiB (515MB), run=10003-10048msec 00:33:08.782 05:48:11 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:08.782 05:48:11 -- target/dif.sh@43 -- # local sub 00:33:08.782 05:48:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:08.782 05:48:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:08.782 05:48:11 -- target/dif.sh@36 -- # local sub_id=0 00:33:08.782 05:48:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:08.782 05:48:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:08.782 05:48:11 -- target/dif.sh@36 -- # local sub_id=1 00:33:08.782 05:48:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:08.782 05:48:11 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:08.782 05:48:11 -- target/dif.sh@36 -- # local sub_id=2 00:33:08.782 05:48:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- 
common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # numjobs=2 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # iodepth=8 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # runtime=5 00:33:08.782 05:48:11 -- target/dif.sh@115 -- # files=1 00:33:08.782 05:48:11 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:08.782 05:48:11 -- target/dif.sh@28 -- # local sub 00:33:08.782 05:48:11 -- target/dif.sh@30 -- # for sub in "$@" 00:33:08.782 05:48:11 -- target/dif.sh@31 -- # create_subsystem 0 00:33:08.782 05:48:11 -- target/dif.sh@18 -- # local sub_id=0 00:33:08.782 05:48:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 bdev_null0 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 [2024-12-07 05:48:11.923386] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@30 -- # for sub in "$@" 00:33:08.782 05:48:11 -- target/dif.sh@31 -- # create_subsystem 1 00:33:08.782 05:48:11 -- target/dif.sh@18 -- # local sub_id=1 00:33:08.782 05:48:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 bdev_null1 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.782 05:48:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.782 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:33:08.782 05:48:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.782 05:48:11 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:08.782 05:48:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.782 05:48:11 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.782 05:48:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:33:08.782 05:48:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:08.782 05:48:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:33:08.782 05:48:11 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.782 05:48:11 -- common/autotest_common.sh@1330 -- # shift 00:33:08.782 05:48:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:33:08.782 05:48:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:33:08.782 05:48:11 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:08.782 05:48:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:08.782 05:48:11 -- target/dif.sh@82 -- # gen_fio_conf 00:33:08.782 05:48:11 -- nvmf/common.sh@520 -- # config=() 00:33:08.782 05:48:11 -- target/dif.sh@54 -- # local file 00:33:08.783 05:48:11 -- nvmf/common.sh@520 -- # local subsystem config 00:33:08.783 05:48:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:08.783 05:48:11 -- target/dif.sh@56 -- # cat 00:33:08.783 05:48:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:08.783 { 00:33:08.783 "params": { 00:33:08.783 "name": "Nvme$subsystem", 00:33:08.783 "trtype": "$TEST_TRANSPORT", 00:33:08.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.783 "adrfam": "ipv4", 00:33:08.783 "trsvcid": "$NVMF_PORT", 00:33:08.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.783 "hdgst": ${hdgst:-false}, 00:33:08.783 "ddgst": ${ddgst:-false} 00:33:08.783 }, 00:33:08.783 "method": "bdev_nvme_attach_controller" 00:33:08.783 } 00:33:08.783 EOF 00:33:08.783 )") 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:33:08.783 05:48:11 -- nvmf/common.sh@542 -- # cat 00:33:08.783 05:48:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:08.783 05:48:11 -- target/dif.sh@72 -- # (( file <= files )) 00:33:08.783 05:48:11 -- target/dif.sh@73 -- # cat 00:33:08.783 05:48:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:08.783 05:48:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:08.783 { 00:33:08.783 "params": { 
00:33:08.783 "name": "Nvme$subsystem", 00:33:08.783 "trtype": "$TEST_TRANSPORT", 00:33:08.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.783 "adrfam": "ipv4", 00:33:08.783 "trsvcid": "$NVMF_PORT", 00:33:08.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.783 "hdgst": ${hdgst:-false}, 00:33:08.783 "ddgst": ${ddgst:-false} 00:33:08.783 }, 00:33:08.783 "method": "bdev_nvme_attach_controller" 00:33:08.783 } 00:33:08.783 EOF 00:33:08.783 )") 00:33:08.783 05:48:11 -- target/dif.sh@72 -- # (( file++ )) 00:33:08.783 05:48:11 -- target/dif.sh@72 -- # (( file <= files )) 00:33:08.783 05:48:11 -- nvmf/common.sh@542 -- # cat 00:33:08.783 05:48:11 -- nvmf/common.sh@544 -- # jq . 00:33:08.783 05:48:11 -- nvmf/common.sh@545 -- # IFS=, 00:33:08.783 05:48:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:08.783 "params": { 00:33:08.783 "name": "Nvme0", 00:33:08.783 "trtype": "tcp", 00:33:08.783 "traddr": "10.0.0.2", 00:33:08.783 "adrfam": "ipv4", 00:33:08.783 "trsvcid": "4420", 00:33:08.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:08.783 "hdgst": false, 00:33:08.783 "ddgst": false 00:33:08.783 }, 00:33:08.783 "method": "bdev_nvme_attach_controller" 00:33:08.783 },{ 00:33:08.783 "params": { 00:33:08.783 "name": "Nvme1", 00:33:08.783 "trtype": "tcp", 00:33:08.783 "traddr": "10.0.0.2", 00:33:08.783 "adrfam": "ipv4", 00:33:08.783 "trsvcid": "4420", 00:33:08.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.783 "hdgst": false, 00:33:08.783 "ddgst": false 00:33:08.783 }, 00:33:08.783 "method": "bdev_nvme_attach_controller" 00:33:08.783 }' 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:33:08.783 05:48:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:33:08.783 05:48:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:33:08.783 05:48:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:33:09.201 05:48:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:33:09.201 05:48:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:33:09.201 05:48:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.201 05:48:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.201 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:09.201 ... 00:33:09.201 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:09.201 ... 00:33:09.201 fio-3.35 00:33:09.201 Starting 4 threads 00:33:09.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.778 [2024-12-07 05:48:12.822777] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:09.778 [2024-12-07 05:48:12.822818] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:15.063 00:33:15.063 filename0: (groupid=0, jobs=1): err= 0: pid=2060276: Sat Dec 7 05:48:17 2024 00:33:15.063 read: IOPS=2143, BW=16.7MiB/s (17.6MB/s)(83.8MiB/5002msec) 00:33:15.063 slat (nsec): min=5355, max=58988, avg=7666.83, stdev=3648.63 00:33:15.063 clat (usec): min=1836, max=7024, avg=3712.47, stdev=549.44 00:33:15.063 lat (usec): min=1844, max=7056, avg=3720.13, stdev=548.88 00:33:15.063 clat percentiles (usec): 00:33:15.063 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3392], 00:33:15.063 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:33:15.063 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4228], 95.00th=[ 5211], 00:33:15.063 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 5997], 99.95th=[ 5997], 00:33:15.063 | 99.99th=[ 6915] 00:33:15.063 bw ( KiB/s): min=16016, max=18192, per=24.61%, avg=17145.60, stdev=888.96, samples=10 00:33:15.063 iops : min= 2002, max= 2274, avg=2143.20, stdev=111.12, samples=10 00:33:15.063 lat (msec) : 2=0.05%, 4=86.00%, 10=13.95% 00:33:15.063 cpu : usr=95.84%, sys=3.26%, ctx=108, majf=0, minf=9 00:33:15.063 IO depths : 1=0.1%, 2=0.1%, 4=70.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 issued rwts: total=10721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.063 filename0: (groupid=0, jobs=1): err= 0: pid=2060277: Sat Dec 7 05:48:17 2024 00:33:15.063 read: IOPS=2173, BW=17.0MiB/s (17.8MB/s)(85.6MiB/5041msec) 00:33:15.063 slat (nsec): min=5356, max=57411, avg=8257.87, stdev=2827.06 00:33:15.063 clat (usec): min=1814, max=42222, avg=3657.40, stdev=1029.02 00:33:15.063 lat (usec): min=1820, max=42243, avg=3665.66, stdev=1029.05 00:33:15.063 clat percentiles (usec): 00:33:15.063 | 1.00th=[ 2933], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3392], 00:33:15.063 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:33:15.063 | 70.00th=[ 3654], 80.00th=[ 3752], 90.00th=[ 4047], 95.00th=[ 4359], 00:33:15.063 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[41157], 00:33:15.063 | 99.99th=[42206] 00:33:15.063 bw ( KiB/s): min=17104, max=18224, per=25.15%, avg=17526.60, stdev=322.29, samples=10 00:33:15.063 iops : min= 2138, max= 2278, avg=2190.80, stdev=40.29, samples=10 00:33:15.063 lat (msec) : 2=0.08%, 4=89.35%, 10=10.50%, 50=0.06% 00:33:15.063 cpu : usr=97.12%, sys=2.64%, ctx=6, majf=0, minf=9 00:33:15.063 IO depths : 1=0.1%, 2=0.1%, 4=66.4%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 issued rwts: total=10959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=2060278: Sat Dec 7 05:48:17 2024 00:33:15.063 read: IOPS=2365, BW=18.5MiB/s (19.4MB/s)(92.4MiB/5001msec) 00:33:15.063 slat (nsec): min=5349, max=50217, avg=6243.43, stdev=2666.89 00:33:15.063 clat (usec): min=1190, max=5608, avg=3363.93, stdev=575.37 00:33:15.063 lat (usec): min=1211, max=5614, avg=3370.17, stdev=575.60 00:33:15.063 clat percentiles (usec): 00:33:15.063 | 
1.00th=[ 2442], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2835], 00:33:15.063 | 30.00th=[ 2999], 40.00th=[ 3195], 50.00th=[ 3392], 60.00th=[ 3523], 00:33:15.063 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3851], 95.00th=[ 4686], 00:33:15.063 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5604], 00:33:15.063 | 99.99th=[ 5604] 00:33:15.063 bw ( KiB/s): min=16640, max=20928, per=27.17%, avg=18930.30, stdev=1734.45, samples=10 00:33:15.063 iops : min= 2080, max= 2616, avg=2366.20, stdev=216.77, samples=10 00:33:15.063 lat (msec) : 2=0.22%, 4=91.62%, 10=8.16% 00:33:15.063 cpu : usr=97.50%, sys=2.26%, ctx=11, majf=0, minf=0 00:33:15.063 IO depths : 1=0.1%, 2=5.7%, 4=64.4%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 issued rwts: total=11831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=2060279: Sat Dec 7 05:48:17 2024 00:33:15.063 read: IOPS=2078, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:33:15.063 slat (nsec): min=5353, max=58766, avg=6363.71, stdev=2766.50 00:33:15.063 clat (usec): min=1326, max=6890, avg=3831.02, stdev=655.56 00:33:15.063 lat (usec): min=1346, max=6898, avg=3837.38, stdev=655.37 00:33:15.063 clat percentiles (usec): 00:33:15.063 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3490], 00:33:15.063 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:33:15.063 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 5211], 95.00th=[ 5473], 00:33:15.063 | 99.00th=[ 5800], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6128], 00:33:15.063 | 99.99th=[ 6915] 00:33:15.063 bw ( KiB/s): min=16016, max=17248, per=23.68%, avg=16499.56, stdev=503.86, samples=9 00:33:15.063 iops : min= 2002, max= 2156, avg=2062.44, stdev=62.98, samples=9 00:33:15.063 lat (msec) : 2=0.06%, 4=81.05%, 10=18.89% 00:33:15.063 cpu : usr=97.52%, sys=2.26%, ctx=7, majf=0, minf=9 00:33:15.063 IO depths : 1=0.1%, 2=0.1%, 4=72.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.063 issued rwts: total=10396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.064 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:15.064 00:33:15.064 Run status group 0 (all jobs): 00:33:15.064 READ: bw=68.0MiB/s (71.4MB/s), 16.2MiB/s-18.5MiB/s (17.0MB/s-19.4MB/s), io=343MiB (360MB), run=5001-5041msec 00:33:15.064 05:48:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:15.064 05:48:18 -- target/dif.sh@43 -- # local sub 00:33:15.064 05:48:18 -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.064 05:48:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:15.064 05:48:18 -- target/dif.sh@36 -- # local sub_id=0 00:33:15.064 05:48:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 
-- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.064 05:48:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:15.064 05:48:18 -- target/dif.sh@36 -- # local sub_id=1 00:33:15.064 05:48:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 00:33:15.064 real 0m24.376s 00:33:15.064 user 5m17.097s 00:33:15.064 sys 0m4.295s 00:33:15.064 05:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 ************************************ 00:33:15.064 END TEST fio_dif_rand_params 00:33:15.064 ************************************ 00:33:15.064 05:48:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:15.064 05:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:15.064 05:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 ************************************ 00:33:15.064 START TEST fio_dif_digest 00:33:15.064 ************************************ 00:33:15.064 05:48:18 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:33:15.064 05:48:18 -- target/dif.sh@123 -- # local NULL_DIF 00:33:15.064 05:48:18 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:15.064 05:48:18 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:15.064 05:48:18 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:15.064 05:48:18 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:15.064 05:48:18 -- target/dif.sh@127 -- # numjobs=3 00:33:15.064 05:48:18 -- target/dif.sh@127 -- # iodepth=3 00:33:15.064 05:48:18 -- target/dif.sh@127 -- # runtime=10 00:33:15.064 05:48:18 -- target/dif.sh@128 -- # hdgst=true 00:33:15.064 05:48:18 -- target/dif.sh@128 -- # ddgst=true 00:33:15.064 05:48:18 -- target/dif.sh@130 -- # create_subsystems 0 00:33:15.064 05:48:18 -- target/dif.sh@28 -- # local sub 00:33:15.064 05:48:18 -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.064 05:48:18 -- target/dif.sh@31 -- # create_subsystem 0 00:33:15.064 05:48:18 -- target/dif.sh@18 -- # local sub_id=0 00:33:15.064 05:48:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 bdev_null0 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.064 05:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.064 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.064 [2024-12-07 05:48:18.243303] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.064 05:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.064 05:48:18 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:15.064 05:48:18 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:15.064 05:48:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:15.064 05:48:18 -- nvmf/common.sh@520 -- # config=() 00:33:15.064 05:48:18 -- nvmf/common.sh@520 -- # local subsystem config 00:33:15.064 05:48:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.064 05:48:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:15.064 05:48:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:15.064 { 00:33:15.064 "params": { 00:33:15.064 "name": "Nvme$subsystem", 00:33:15.064 "trtype": "$TEST_TRANSPORT", 00:33:15.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.064 "adrfam": "ipv4", 00:33:15.064 "trsvcid": "$NVMF_PORT", 00:33:15.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.064 "hdgst": ${hdgst:-false}, 00:33:15.064 "ddgst": ${ddgst:-false} 00:33:15.064 }, 00:33:15.064 "method": "bdev_nvme_attach_controller" 00:33:15.064 } 00:33:15.064 EOF 00:33:15.064 )") 00:33:15.064 05:48:18 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.064 05:48:18 -- target/dif.sh@82 -- # gen_fio_conf 00:33:15.064 05:48:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:33:15.064 05:48:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.064 05:48:18 -- target/dif.sh@54 -- # local file 00:33:15.064 05:48:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:33:15.064 05:48:18 -- target/dif.sh@56 -- # cat 00:33:15.064 05:48:18 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.064 05:48:18 -- common/autotest_common.sh@1330 -- # shift 00:33:15.064 05:48:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:33:15.064 05:48:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.064 05:48:18 -- nvmf/common.sh@542 -- # cat 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.064 05:48:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:33:15.064 05:48:18 -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:33:15.064 05:48:18 -- nvmf/common.sh@544 -- # jq . 
00:33:15.064 05:48:18 -- nvmf/common.sh@545 -- # IFS=, 00:33:15.064 05:48:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:15.064 "params": { 00:33:15.064 "name": "Nvme0", 00:33:15.064 "trtype": "tcp", 00:33:15.064 "traddr": "10.0.0.2", 00:33:15.064 "adrfam": "ipv4", 00:33:15.064 "trsvcid": "4420", 00:33:15.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.064 "hdgst": true, 00:33:15.064 "ddgst": true 00:33:15.064 }, 00:33:15.064 "method": "bdev_nvme_attach_controller" 00:33:15.064 }' 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:33:15.064 05:48:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:33:15.064 05:48:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:33:15.064 05:48:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:33:15.350 05:48:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:33:15.350 05:48:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:33:15.350 05:48:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:15.350 05:48:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.614 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:15.614 ... 00:33:15.614 fio-3.35 00:33:15.614 Starting 3 threads 00:33:15.614 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.875 [2024-12-07 05:48:19.099895] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
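Before the digest workload starts, the trace rebuilds the target around a DIF-type-3 null bdev behind a single TCP subsystem, and the host side attaches with header and data digests enabled. Condensed into the equivalent rpc.py calls (arguments copied from the rpc_cmd lines above; the rpc.py path is the standard SPDK script location, and the tcp transport is assumed to have been created earlier in the test run):

RPC="$SPDK_DIR/scripts/rpc.py"      # rpc_cmd in the trace is a thin wrapper around this script
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio then connects with "hdgst": true and "ddgst": true, so every NVMe/TCP PDU carries header
# and data digests while the 128k, iodepth=3, 3-job read workload runs for the 10-second window.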
00:33:15.875 [2024-12-07 05:48:19.099946] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:28.119 00:33:28.119 filename0: (groupid=0, jobs=1): err= 0: pid=2061804: Sat Dec 7 05:48:29 2024 00:33:28.119 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10034msec) 00:33:28.119 slat (nsec): min=5728, max=46409, avg=7331.29, stdev=1827.51 00:33:28.119 clat (usec): min=8156, max=55741, avg=13251.33, stdev=3082.56 00:33:28.119 lat (usec): min=8162, max=55747, avg=13258.66, stdev=3082.53 00:33:28.119 clat percentiles (usec): 00:33:28.119 | 1.00th=[10290], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:33:28.119 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:33:28.119 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:33:28.119 | 99.00th=[15664], 99.50th=[46924], 99.90th=[54789], 99.95th=[54789], 00:33:28.119 | 99.99th=[55837] 00:33:28.119 bw ( KiB/s): min=26624, max=31488, per=33.94%, avg=29017.60, stdev=1157.67, samples=20 00:33:28.119 iops : min= 208, max= 246, avg=226.70, stdev= 9.04, samples=20 00:33:28.119 lat (msec) : 10=0.88%, 20=98.59%, 50=0.09%, 100=0.44% 00:33:28.119 cpu : usr=95.72%, sys=4.07%, ctx=13, majf=0, minf=171 00:33:28.119 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 issued rwts: total=2270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.119 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:28.119 filename0: (groupid=0, jobs=1): err= 0: pid=2061805: Sat Dec 7 05:48:29 2024 00:33:28.119 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10045msec) 00:33:28.119 slat (nsec): min=5731, max=34411, avg=6909.06, stdev=1889.91 00:33:28.119 clat (usec): min=8424, max=54711, avg=13582.35, stdev=2144.68 00:33:28.119 lat (usec): min=8430, max=54718, avg=13589.26, stdev=2144.71 00:33:28.119 clat percentiles (usec): 00:33:28.119 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:33:28.119 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:33:28.119 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:33:28.119 | 99.00th=[16057], 99.50th=[16581], 99.90th=[54264], 99.95th=[54264], 00:33:28.119 | 99.99th=[54789] 00:33:28.119 bw ( KiB/s): min=26112, max=29440, per=33.11%, avg=28313.60, stdev=725.98, samples=20 00:33:28.119 iops : min= 204, max= 230, avg=221.20, stdev= 5.67, samples=20 00:33:28.119 lat (msec) : 10=1.22%, 20=98.55%, 50=0.05%, 100=0.18% 00:33:28.119 cpu : usr=95.18%, sys=4.59%, ctx=24, majf=0, minf=136 00:33:28.119 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.119 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:28.119 filename0: (groupid=0, jobs=1): err= 0: pid=2061806: Sat Dec 7 05:48:29 2024 00:33:28.119 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(278MiB/10046msec) 00:33:28.119 slat (nsec): min=5734, max=32458, avg=7773.98, stdev=2106.77 00:33:28.119 clat (usec): min=8702, max=54103, avg=13503.97, stdev=2111.85 00:33:28.119 lat (usec): min=8711, max=54109, avg=13511.74, stdev=2111.85 00:33:28.119 clat percentiles 
(usec): 00:33:28.119 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:33:28.119 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:33:28.119 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:33:28.119 | 99.00th=[16057], 99.50th=[16581], 99.90th=[53216], 99.95th=[53740], 00:33:28.119 | 99.99th=[54264] 00:33:28.119 bw ( KiB/s): min=25344, max=29696, per=33.31%, avg=28482.75, stdev=941.20, samples=20 00:33:28.119 iops : min= 198, max= 232, avg=222.50, stdev= 7.37, samples=20 00:33:28.119 lat (msec) : 10=1.21%, 20=98.56%, 50=0.09%, 100=0.13% 00:33:28.119 cpu : usr=95.45%, sys=4.33%, ctx=16, majf=0, minf=126 00:33:28.119 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.119 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.119 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:28.119 00:33:28.119 Run status group 0 (all jobs): 00:33:28.120 READ: bw=83.5MiB/s (87.6MB/s), 27.6MiB/s-28.3MiB/s (28.9MB/s-29.7MB/s), io=839MiB (880MB), run=10034-10046msec 00:33:28.120 05:48:29 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:28.120 05:48:29 -- target/dif.sh@43 -- # local sub 00:33:28.120 05:48:29 -- target/dif.sh@45 -- # for sub in "$@" 00:33:28.120 05:48:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:28.120 05:48:29 -- target/dif.sh@36 -- # local sub_id=0 00:33:28.120 05:48:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:28.120 05:48:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.120 05:48:29 -- common/autotest_common.sh@10 -- # set +x 00:33:28.120 05:48:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.120 05:48:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:28.120 05:48:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.120 05:48:29 -- common/autotest_common.sh@10 -- # set +x 00:33:28.120 05:48:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.120 00:33:28.120 real 0m11.217s 00:33:28.120 user 0m40.146s 00:33:28.120 sys 0m1.635s 00:33:28.120 05:48:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:28.120 05:48:29 -- common/autotest_common.sh@10 -- # set +x 00:33:28.120 ************************************ 00:33:28.120 END TEST fio_dif_digest 00:33:28.120 ************************************ 00:33:28.120 05:48:29 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:28.120 05:48:29 -- target/dif.sh@147 -- # nvmftestfini 00:33:28.120 05:48:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:28.120 05:48:29 -- nvmf/common.sh@116 -- # sync 00:33:28.120 05:48:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:28.120 05:48:29 -- nvmf/common.sh@119 -- # set +e 00:33:28.120 05:48:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:28.120 05:48:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:28.120 rmmod nvme_tcp 00:33:28.120 rmmod nvme_fabrics 00:33:28.120 rmmod nvme_keyring 00:33:28.120 05:48:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:28.120 05:48:29 -- nvmf/common.sh@123 -- # set -e 00:33:28.120 05:48:29 -- nvmf/common.sh@124 -- # return 0 00:33:28.120 05:48:29 -- nvmf/common.sh@477 -- # '[' -n 2050682 ']' 00:33:28.120 05:48:29 -- nvmf/common.sh@478 -- # killprocess 2050682 00:33:28.120 05:48:29 -- 
common/autotest_common.sh@936 -- # '[' -z 2050682 ']' 00:33:28.120 05:48:29 -- common/autotest_common.sh@940 -- # kill -0 2050682 00:33:28.120 05:48:29 -- common/autotest_common.sh@941 -- # uname 00:33:28.120 05:48:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:28.120 05:48:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2050682 00:33:28.120 05:48:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:28.120 05:48:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:28.120 05:48:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2050682' 00:33:28.120 killing process with pid 2050682 00:33:28.120 05:48:29 -- common/autotest_common.sh@955 -- # kill 2050682 00:33:28.120 05:48:29 -- common/autotest_common.sh@960 -- # wait 2050682 00:33:28.120 05:48:29 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:28.120 05:48:29 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:30.036 Waiting for block devices as requested 00:33:30.036 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:30.297 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:30.297 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:30.297 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:30.558 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:30.558 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:30.558 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:30.819 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:30.819 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:31.080 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:31.080 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:31.080 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:31.341 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:31.341 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:31.341 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:31.341 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:31.602 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:31.863 05:48:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:31.863 05:48:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:31.863 05:48:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:31.863 05:48:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:31.863 05:48:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.863 05:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:31.863 05:48:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.771 05:48:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:33.771 00:33:33.771 real 1m17.743s 00:33:33.771 user 7m59.578s 00:33:33.771 sys 0m20.840s 00:33:33.771 05:48:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:33.771 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:33:33.771 ************************************ 00:33:33.771 END TEST nvmf_dif 00:33:33.771 ************************************ 00:33:33.771 05:48:37 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:33.771 05:48:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:33.771 05:48:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:33.771 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:33:34.031 ************************************ 00:33:34.031 START TEST nvmf_abort_qd_sizes 
00:33:34.031 ************************************ 00:33:34.031 05:48:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:34.031 * Looking for test storage... 00:33:34.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.031 05:48:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:33:34.031 05:48:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:33:34.031 05:48:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:33:34.031 05:48:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:33:34.031 05:48:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:33:34.031 05:48:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:33:34.031 05:48:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:33:34.031 05:48:37 -- scripts/common.sh@335 -- # IFS=.-: 00:33:34.031 05:48:37 -- scripts/common.sh@335 -- # read -ra ver1 00:33:34.031 05:48:37 -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.031 05:48:37 -- scripts/common.sh@336 -- # read -ra ver2 00:33:34.031 05:48:37 -- scripts/common.sh@337 -- # local 'op=<' 00:33:34.031 05:48:37 -- scripts/common.sh@339 -- # ver1_l=2 00:33:34.031 05:48:37 -- scripts/common.sh@340 -- # ver2_l=1 00:33:34.031 05:48:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:33:34.031 05:48:37 -- scripts/common.sh@343 -- # case "$op" in 00:33:34.031 05:48:37 -- scripts/common.sh@344 -- # : 1 00:33:34.031 05:48:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:33:34.031 05:48:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:34.031 05:48:37 -- scripts/common.sh@364 -- # decimal 1 00:33:34.031 05:48:37 -- scripts/common.sh@352 -- # local d=1 00:33:34.031 05:48:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.031 05:48:37 -- scripts/common.sh@354 -- # echo 1 00:33:34.031 05:48:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:33:34.031 05:48:37 -- scripts/common.sh@365 -- # decimal 2 00:33:34.031 05:48:37 -- scripts/common.sh@352 -- # local d=2 00:33:34.031 05:48:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.031 05:48:37 -- scripts/common.sh@354 -- # echo 2 00:33:34.031 05:48:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:33:34.031 05:48:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:33:34.031 05:48:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:33:34.031 05:48:37 -- scripts/common.sh@367 -- # return 0 00:33:34.031 05:48:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.031 05:48:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:33:34.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.031 --rc genhtml_branch_coverage=1 00:33:34.031 --rc genhtml_function_coverage=1 00:33:34.031 --rc genhtml_legend=1 00:33:34.031 --rc geninfo_all_blocks=1 00:33:34.031 --rc geninfo_unexecuted_blocks=1 00:33:34.031 00:33:34.031 ' 00:33:34.031 05:48:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:33:34.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.031 --rc genhtml_branch_coverage=1 00:33:34.031 --rc genhtml_function_coverage=1 00:33:34.031 --rc genhtml_legend=1 00:33:34.031 --rc geninfo_all_blocks=1 00:33:34.031 --rc geninfo_unexecuted_blocks=1 00:33:34.031 00:33:34.031 ' 00:33:34.031 05:48:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:33:34.031 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:34.031 --rc genhtml_branch_coverage=1 00:33:34.031 --rc genhtml_function_coverage=1 00:33:34.031 --rc genhtml_legend=1 00:33:34.031 --rc geninfo_all_blocks=1 00:33:34.031 --rc geninfo_unexecuted_blocks=1 00:33:34.031 00:33:34.031 ' 00:33:34.031 05:48:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:33:34.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.031 --rc genhtml_branch_coverage=1 00:33:34.031 --rc genhtml_function_coverage=1 00:33:34.031 --rc genhtml_legend=1 00:33:34.031 --rc geninfo_all_blocks=1 00:33:34.031 --rc geninfo_unexecuted_blocks=1 00:33:34.031 00:33:34.031 ' 00:33:34.031 05:48:37 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.031 05:48:37 -- nvmf/common.sh@7 -- # uname -s 00:33:34.031 05:48:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.031 05:48:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.031 05:48:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.031 05:48:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.031 05:48:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.031 05:48:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.031 05:48:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.031 05:48:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.031 05:48:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.031 05:48:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.031 05:48:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.031 05:48:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.031 05:48:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.031 05:48:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.031 05:48:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.031 05:48:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.031 05:48:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.031 05:48:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.031 05:48:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.031 05:48:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.032 05:48:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.032 05:48:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.032 05:48:37 -- paths/export.sh@5 -- # export PATH 00:33:34.032 05:48:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.032 05:48:37 -- nvmf/common.sh@46 -- # : 0 00:33:34.032 05:48:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:34.032 05:48:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:34.032 05:48:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:34.032 05:48:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.032 05:48:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.032 05:48:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:34.032 05:48:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:34.032 05:48:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:34.032 05:48:37 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:34.032 05:48:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:34.032 05:48:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.032 05:48:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:34.032 05:48:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:34.032 05:48:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:34.032 05:48:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.032 05:48:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:34.032 05:48:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.032 05:48:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:34.032 05:48:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:34.032 05:48:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:34.032 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:33:42.164 05:48:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:42.164 05:48:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:42.164 05:48:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:42.164 05:48:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:42.164 05:48:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:42.164 05:48:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:42.164 05:48:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:42.164 05:48:44 -- nvmf/common.sh@294 -- # net_devs=() 00:33:42.164 05:48:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:42.164 05:48:44 -- nvmf/common.sh@295 -- # e810=() 00:33:42.164 05:48:44 -- nvmf/common.sh@295 -- # local -ga e810 00:33:42.164 05:48:44 -- nvmf/common.sh@296 -- # x722=() 00:33:42.164 05:48:44 -- nvmf/common.sh@296 -- # local -ga x722 00:33:42.164 05:48:44 -- nvmf/common.sh@297 -- # mlx=() 00:33:42.164 05:48:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:42.164 05:48:44 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.164 05:48:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:42.164 05:48:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:42.164 05:48:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:42.164 05:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:42.164 05:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:42.164 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:42.164 05:48:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:42.164 05:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:42.164 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:42.164 05:48:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:42.164 05:48:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:42.164 05:48:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:42.165 05:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:42.165 05:48:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.165 05:48:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:42.165 05:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.165 05:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:42.165 Found net devices under 0000:31:00.0: cvl_0_0 00:33:42.165 05:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.165 05:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:42.165 05:48:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.165 05:48:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
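The device scan above is driven by a small table of PCI vendor:device IDs: E810-class NICs are the Intel (0x8086) 0x1592 and 0x159b parts, X722 is 0x37d2, and the remaining entries cover the Mellanox mlx5 family; for every matching function the helper then resolves the kernel netdev name from sysfs. A rough standalone equivalent for one port (the 0x159b ID and the sysfs path are taken from the trace; the lspci call is only an illustration for this sketch, not something the test itself runs):

  # list Intel E810 functions by vendor:device ID, as classified above
  lspci -D -d 8086:159b
  # resolve the netdev behind one function, e.g. 0000:31:00.0
  ls /sys/bus/pci/devices/0000:31:00.0/net/    # -> cvl_0_0 in this run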
00:33:42.165 05:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.165 05:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:42.165 Found net devices under 0000:31:00.1: cvl_0_1 00:33:42.165 05:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.165 05:48:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:42.165 05:48:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:42.165 05:48:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:42.165 05:48:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:42.165 05:48:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:42.165 05:48:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.165 05:48:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.165 05:48:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.165 05:48:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:42.165 05:48:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.165 05:48:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.165 05:48:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:42.165 05:48:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.165 05:48:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.165 05:48:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:42.165 05:48:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:42.165 05:48:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.165 05:48:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.165 05:48:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.165 05:48:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.165 05:48:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:42.165 05:48:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.165 05:48:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.165 05:48:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.165 05:48:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:42.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:33:42.165 00:33:42.165 --- 10.0.0.2 ping statistics --- 00:33:42.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.165 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:33:42.165 05:48:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:42.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:33:42.165 00:33:42.165 --- 10.0.0.1 ping statistics --- 00:33:42.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.165 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:33:42.165 05:48:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.165 05:48:44 -- nvmf/common.sh@410 -- # return 0 00:33:42.165 05:48:44 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:42.165 05:48:44 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:45.479 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:45.479 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:45.479 05:48:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.479 05:48:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:45.479 05:48:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:45.479 05:48:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.479 05:48:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:45.479 05:48:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:45.479 05:48:48 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:45.479 05:48:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:45.479 05:48:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:45.479 05:48:48 -- common/autotest_common.sh@10 -- # set +x 00:33:45.479 05:48:48 -- nvmf/common.sh@469 -- # nvmfpid=2071460 00:33:45.479 05:48:48 -- nvmf/common.sh@470 -- # waitforlisten 2071460 00:33:45.479 05:48:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:45.479 05:48:48 -- common/autotest_common.sh@829 -- # '[' -z 2071460 ']' 00:33:45.479 05:48:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.479 05:48:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.479 05:48:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.479 05:48:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.479 05:48:48 -- common/autotest_common.sh@10 -- # set +x 00:33:45.479 [2024-12-07 05:48:48.670893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
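At this point nvmftestinit has built the back-to-back TCP topology used for the rest of the run: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, TCP port 4420 is opened in iptables, reachability is ping-checked in both directions, and nvmf_tgt is then started inside the namespace. Condensed from the commands in the trace above; interface names, addresses and nvmf_tgt flags are exactly the ones shown, the nvmf_tgt path is shortened, and the helper also brings up cvl_0_1 and the namespace loopback (omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf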
00:33:45.479 [2024-12-07 05:48:48.670940] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.479 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.741 [2024-12-07 05:48:48.739331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.741 [2024-12-07 05:48:48.803552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:45.741 [2024-12-07 05:48:48.803683] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.741 [2024-12-07 05:48:48.803693] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.741 [2024-12-07 05:48:48.803702] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.741 [2024-12-07 05:48:48.803837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.741 [2024-12-07 05:48:48.803985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.741 [2024-12-07 05:48:48.804138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.741 [2024-12-07 05:48:48.804138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.309 05:48:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:46.309 05:48:49 -- common/autotest_common.sh@862 -- # return 0 00:33:46.309 05:48:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:46.309 05:48:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:46.309 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.309 05:48:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:46.309 05:48:49 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:46.309 05:48:49 -- scripts/common.sh@312 -- # local nvmes 00:33:46.309 05:48:49 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:33:46.309 05:48:49 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:46.309 05:48:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:46.309 05:48:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:46.309 05:48:49 -- scripts/common.sh@322 -- # uname -s 00:33:46.309 05:48:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:46.309 05:48:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:46.309 05:48:49 -- scripts/common.sh@327 -- # (( 1 )) 00:33:46.309 05:48:49 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:33:46.309 05:48:49 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:46.309 05:48:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:46.309 05:48:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.310 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.310 ************************************ 00:33:46.310 START TEST 
spdk_target_abort 00:33:46.310 ************************************ 00:33:46.310 05:48:49 -- common/autotest_common.sh@1114 -- # spdk_target 00:33:46.310 05:48:49 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:46.310 05:48:49 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:46.310 05:48:49 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:46.310 05:48:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.310 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.878 spdk_targetn1 00:33:46.878 05:48:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:46.878 05:48:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.878 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.878 [2024-12-07 05:48:49.822987] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.878 05:48:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:46.878 05:48:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.878 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.878 05:48:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:46.878 05:48:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.878 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.878 05:48:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:46.878 05:48:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.878 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:33:46.878 [2024-12-07 05:48:49.851308] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.878 05:48:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:46.878 05:48:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:46.878 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.878 [2024-12-07 05:48:49.986499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:560 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:33:46.878 [2024-12-07 05:48:49.986521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:33:46.878 [2024-12-07 05:48:49.994466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:840 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:33:46.878 [2024-12-07 05:48:49.994481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:33:46.878 [2024-12-07 05:48:50.010491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1400 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:46.878 [2024-12-07 05:48:50.010509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b0 p:1 m:0 dnr:0 00:33:46.878 [2024-12-07 05:48:50.033476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2192 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:46.878 [2024-12-07 05:48:50.033492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:46.878 [2024-12-07 05:48:50.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3344 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:46.878 [2024-12-07 05:48:50.065531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a5 p:0 m:0 dnr:0 00:33:50.175 Initializing NVMe Controllers 00:33:50.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:50.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:50.175 Initialization complete. Launching workers. 
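Before the first abort run, the spdk_target_abort case assembles its target entirely over JSON-RPC: the local NVMe drive at 0000:65:00.0 is attached as controller spdk_target (its namespace shows up as bdev spdk_targetn1), a TCP transport is created, a subsystem is created, the bdev is added as namespace 1, and a listener is opened on 10.0.0.2:4420, exactly the rpc_cmd calls traced above. Issued by hand against the running nvmf_tgt, the same sequence would look roughly like this (scripts/rpc.py talking to the default /var/tmp/spdk.sock is an assumption of the sketch; method names and arguments are verbatim from the trace):

  ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420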
00:33:50.175 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12137, failed: 5 00:33:50.175 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3006, failed to submit 9136 00:33:50.175 success 676, unsuccess 2330, failed 0 00:33:50.175 05:48:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:50.175 05:48:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:50.175 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.175 [2024-12-07 05:48:53.294077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:33:50.175 [2024-12-07 05:48:53.294117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0033 p:1 m:0 dnr:0 00:33:50.175 [2024-12-07 05:48:53.302021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:480 len:8 PRP1 0x200007c48000 PRP2 0x0 00:33:50.175 [2024-12-07 05:48:53.302044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:33:50.175 [2024-12-07 05:48:53.334129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1208 len:8 PRP1 0x200007c42000 PRP2 0x0 00:33:50.175 [2024-12-07 05:48:53.334152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:009f p:1 m:0 dnr:0 00:33:50.175 [2024-12-07 05:48:53.366199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:1976 len:8 PRP1 0x200007c56000 PRP2 0x0 00:33:50.175 [2024-12-07 05:48:53.366221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:33:50.175 [2024-12-07 05:48:53.382120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2344 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:33:50.175 [2024-12-07 05:48:53.382141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:50.435 [2024-12-07 05:48:53.421166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:3360 len:8 PRP1 0x200007c58000 PRP2 0x0 00:33:50.435 [2024-12-07 05:48:53.421189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:33:51.872 [2024-12-07 05:48:54.938313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:38104 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:33:51.872 [2024-12-07 05:48:54.938346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:009f p:1 m:0 dnr:0 00:33:52.441 [2024-12-07 05:48:55.472999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:50640 len:8 PRP1 0x200007c42000 PRP2 0x0 00:33:52.441 [2024-12-07 05:48:55.473034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00bc p:1 m:0 dnr:0 00:33:52.701 [2024-12-07 05:48:55.823256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:58696 
len:8 PRP1 0x200007c56000 PRP2 0x0 00:33:52.701 [2024-12-07 05:48:55.823285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00aa p:1 m:0 dnr:0 00:33:52.959 [2024-12-07 05:48:56.023238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:63344 len:8 PRP1 0x200007c56000 PRP2 0x0 00:33:52.959 [2024-12-07 05:48:56.023263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:33:53.218 Initializing NVMe Controllers 00:33:53.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:53.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:53.218 Initialization complete. Launching workers. 00:33:53.218 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8648, failed: 10 00:33:53.218 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1204, failed to submit 7454 00:33:53.218 success 396, unsuccess 808, failed 0 00:33:53.218 05:48:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:53.218 05:48:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:53.478 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.478 [2024-12-07 05:48:56.564947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:154 nsid:1 lba:1976 len:8 PRP1 0x2000078d6000 PRP2 0x0 00:33:53.478 [2024-12-07 05:48:56.564970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:154 cdw0:0 sqhd:0067 p:1 m:0 dnr:0 00:33:53.737 [2024-12-07 05:48:56.929260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:162 nsid:1 lba:44328 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:33:53.737 [2024-12-07 05:48:56.929282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:162 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:57.038 Initializing NVMe Controllers 00:33:57.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:57.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:57.038 Initialization complete. Launching workers. 
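The per-run abort summaries are internally consistent and easiest to read as one unit: adding "abort submitted" to "failed to submit" gives the total number of I/Os the run issued, and "success" plus "unsuccess" gives the aborts that actually reached the controller, which suggests the abort example attempts one Abort per outstanding I/O. Checking the two completed runs above with nothing but the printed numbers:

  qd 4 : 3006 submitted + 9136 not submitted = 12142 = 12137 I/O completed + 5 failed
         676 success + 2330 unsuccess        = 3006 aborts submitted
  qd 24: 1204 submitted + 7454 not submitted = 8658  = 8648 I/O completed + 10 failed
         396 success + 808 unsuccess         = 1204 aborts submitted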
00:33:57.038 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43720, failed: 2 00:33:57.038 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2703, failed to submit 41019 00:33:57.038 success 626, unsuccess 2077, failed 0 00:33:57.038 05:48:59 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:57.038 05:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.038 05:48:59 -- common/autotest_common.sh@10 -- # set +x 00:33:57.038 05:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.038 05:48:59 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:57.038 05:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.038 05:48:59 -- common/autotest_common.sh@10 -- # set +x 00:33:58.419 05:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.419 05:49:01 -- target/abort_qd_sizes.sh@62 -- # killprocess 2071460 00:33:58.419 05:49:01 -- common/autotest_common.sh@936 -- # '[' -z 2071460 ']' 00:33:58.419 05:49:01 -- common/autotest_common.sh@940 -- # kill -0 2071460 00:33:58.420 05:49:01 -- common/autotest_common.sh@941 -- # uname 00:33:58.420 05:49:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:58.420 05:49:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2071460 00:33:58.420 05:49:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:58.420 05:49:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:58.420 05:49:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2071460' 00:33:58.420 killing process with pid 2071460 00:33:58.420 05:49:01 -- common/autotest_common.sh@955 -- # kill 2071460 00:33:58.420 05:49:01 -- common/autotest_common.sh@960 -- # wait 2071460 00:33:58.420 00:33:58.420 real 0m12.115s 00:33:58.420 user 0m49.344s 00:33:58.420 sys 0m1.738s 00:33:58.420 05:49:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:33:58.420 05:49:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.420 ************************************ 00:33:58.420 END TEST spdk_target_abort 00:33:58.420 ************************************ 00:33:58.680 05:49:01 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:58.680 05:49:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:58.680 05:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:58.680 05:49:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.680 ************************************ 00:33:58.680 START TEST kernel_target_abort 00:33:58.680 ************************************ 00:33:58.680 05:49:01 -- common/autotest_common.sh@1114 -- # kernel_target 00:33:58.680 05:49:01 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:58.680 05:49:01 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:58.680 05:49:01 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:58.680 05:49:01 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:58.680 05:49:01 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:58.680 05:49:01 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:58.680 05:49:01 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:58.680 05:49:01 -- nvmf/common.sh@627 -- # local block nvme 00:33:58.680 05:49:01 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:58.680 05:49:01 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:58.680 05:49:01 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:58.680 05:49:01 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:01.984 Waiting for block devices as requested 00:34:01.984 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:01.984 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:02.243 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:02.243 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:02.243 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:02.504 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:02.504 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:02.504 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:02.764 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:02.764 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:03.024 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:03.024 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:03.024 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:03.024 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:03.284 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:03.284 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:03.284 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:03.545 05:49:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:34:03.545 05:49:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:03.545 05:49:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:34:03.545 05:49:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:34:03.545 05:49:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:03.807 No valid GPT data, bailing 00:34:03.807 05:49:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:03.807 05:49:06 -- scripts/common.sh@393 -- # pt= 00:34:03.807 05:49:06 -- scripts/common.sh@394 -- # return 1 00:34:03.807 05:49:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:34:03.807 05:49:06 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:34:03.807 05:49:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:03.807 05:49:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:03.807 05:49:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:03.807 05:49:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:34:03.807 05:49:06 -- nvmf/common.sh@654 -- # echo 1 00:34:03.807 05:49:06 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:34:03.807 05:49:06 -- nvmf/common.sh@656 -- # echo 1 00:34:03.807 05:49:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:34:03.807 05:49:06 -- nvmf/common.sh@663 -- # echo tcp 00:34:03.807 05:49:06 -- nvmf/common.sh@664 -- # echo 4420 00:34:03.807 05:49:06 -- nvmf/common.sh@665 -- # echo ipv4 00:34:03.807 05:49:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:03.807 05:49:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:34:03.807 00:34:03.807 Discovery Log Number of Records 2, Generation counter 2 00:34:03.807 =====Discovery Log Entry 0====== 00:34:03.807 trtype: tcp 00:34:03.807 adrfam: ipv4 00:34:03.807 
subtype: current discovery subsystem 00:34:03.807 treq: not specified, sq flow control disable supported 00:34:03.807 portid: 1 00:34:03.807 trsvcid: 4420 00:34:03.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:03.807 traddr: 10.0.0.1 00:34:03.807 eflags: none 00:34:03.807 sectype: none 00:34:03.807 =====Discovery Log Entry 1====== 00:34:03.807 trtype: tcp 00:34:03.807 adrfam: ipv4 00:34:03.807 subtype: nvme subsystem 00:34:03.807 treq: not specified, sq flow control disable supported 00:34:03.807 portid: 1 00:34:03.807 trsvcid: 4420 00:34:03.807 subnqn: kernel_target 00:34:03.807 traddr: 10.0.0.1 00:34:03.807 eflags: none 00:34:03.807 sectype: none 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:03.807 05:49:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:03.807 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.114 Initializing NVMe Controllers 00:34:07.114 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:07.114 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:07.114 Initialization complete. Launching workers. 
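The kernel_target_abort case running here needs no SPDK target at all: configure_kernel_target drives the in-kernel nvmet/nvmet_tcp target through configfs, exporting the local /dev/nvme0n1 (the spdk-gpt.py check above only confirms the disk carries no partition table before it is handed over) as subsystem kernel_target on 10.0.0.1:4420, which is what the nvme discover listing above reports back. The core of the traced mkdir/echo/ln -s sequence, spelled out; the attribute file names on the right-hand side of the redirections are not visible in the trace and are filled in here from the standard nvmet configfs layout, so treat them as an assumption (the matching rmdir cleanup and "modprobe -r nvmet_tcp nvmet" appear further down):

  modprobe nvmet
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1
  echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
  echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/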
00:34:07.114 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68808, failed: 0 00:34:07.114 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 68808, failed to submit 0 00:34:07.114 success 0, unsuccess 68808, failed 0 00:34:07.114 05:49:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:07.114 05:49:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:07.114 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.411 Initializing NVMe Controllers 00:34:10.411 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:10.411 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:10.411 Initialization complete. Launching workers. 00:34:10.411 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 111125, failed: 0 00:34:10.411 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27970, failed to submit 83155 00:34:10.411 success 0, unsuccess 27970, failed 0 00:34:10.411 05:49:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:10.411 05:49:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:10.411 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.952 Initializing NVMe Controllers 00:34:12.952 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:12.952 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:12.952 Initialization complete. Launching workers. 
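These kernel-target runs read differently from the SPDK-target runs: every abort that gets submitted comes back "unsuccess" (success 0 in each of the qd 4/24/64 summaries), which is consistent with the Linux nvmet target accepting the Abort command but never actually cancelling the referenced I/O, so it always reports "command not aborted". The counters still balance the same way as before, for example in the qd-24 run just above:

  27970 submitted + 83155 not submitted = 111125 I/Os completed
  0 success + 27970 unsuccess           = 27970 aborts submitted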
00:34:12.952 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 105902, failed: 0 00:34:12.952 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26446, failed to submit 79456 00:34:12.952 success 0, unsuccess 26446, failed 0 00:34:12.953 05:49:16 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:12.953 05:49:16 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:12.953 05:49:16 -- nvmf/common.sh@677 -- # echo 0 00:34:13.212 05:49:16 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:13.212 05:49:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:13.212 05:49:16 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:13.213 05:49:16 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:13.213 05:49:16 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:13.213 05:49:16 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:13.213 00:34:13.213 real 0m14.586s 00:34:13.213 user 0m8.440s 00:34:13.213 sys 0m3.586s 00:34:13.213 05:49:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:34:13.213 05:49:16 -- common/autotest_common.sh@10 -- # set +x 00:34:13.213 ************************************ 00:34:13.213 END TEST kernel_target_abort 00:34:13.213 ************************************ 00:34:13.213 05:49:16 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:13.213 05:49:16 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:13.213 05:49:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:13.213 05:49:16 -- nvmf/common.sh@116 -- # sync 00:34:13.213 05:49:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:13.213 05:49:16 -- nvmf/common.sh@119 -- # set +e 00:34:13.213 05:49:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:13.213 05:49:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:13.213 rmmod nvme_tcp 00:34:13.213 rmmod nvme_fabrics 00:34:13.213 rmmod nvme_keyring 00:34:13.213 05:49:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:13.213 05:49:16 -- nvmf/common.sh@123 -- # set -e 00:34:13.213 05:49:16 -- nvmf/common.sh@124 -- # return 0 00:34:13.213 05:49:16 -- nvmf/common.sh@477 -- # '[' -n 2071460 ']' 00:34:13.213 05:49:16 -- nvmf/common.sh@478 -- # killprocess 2071460 00:34:13.213 05:49:16 -- common/autotest_common.sh@936 -- # '[' -z 2071460 ']' 00:34:13.213 05:49:16 -- common/autotest_common.sh@940 -- # kill -0 2071460 00:34:13.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2071460) - No such process 00:34:13.213 05:49:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2071460 is not found' 00:34:13.213 Process with pid 2071460 is not found 00:34:13.213 05:49:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:13.213 05:49:16 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:16.666 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:16.666 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:34:16.926 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:16.926 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:16.926 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:17.185 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:17.185 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:17.185 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:17.185 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:17.445 05:49:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:17.445 05:49:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:17.445 05:49:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:17.445 05:49:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:17.445 05:49:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.445 05:49:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.445 05:49:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.985 05:49:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:19.985 00:34:19.985 real 0m45.609s 00:34:19.985 user 1m3.129s 00:34:19.985 sys 0m16.421s 00:34:19.985 05:49:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:34:19.985 05:49:22 -- common/autotest_common.sh@10 -- # set +x 00:34:19.985 ************************************ 00:34:19.985 END TEST nvmf_abort_qd_sizes 00:34:19.985 ************************************ 00:34:19.985 05:49:22 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:19.985 05:49:22 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:34:19.985 05:49:22 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:34:19.985 05:49:22 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:34:19.985 05:49:22 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:34:19.985 05:49:22 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:34:19.985 05:49:22 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:34:19.985 05:49:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:19.985 05:49:22 -- common/autotest_common.sh@10 -- # set +x 00:34:19.985 05:49:22 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:34:19.985 05:49:22 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:34:19.985 05:49:22 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:34:19.985 05:49:22 -- common/autotest_common.sh@10 -- # set +x 00:34:26.568 INFO: APP EXITING 00:34:26.568 INFO: killing all VMs 00:34:26.568 INFO: killing vhost app 00:34:26.568 INFO: EXIT DONE 00:34:29.868 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:30.129 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:30.129 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:30.390 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:30.390 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:30.390 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:30.390 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:34.595 Cleaning 00:34:34.595 Removing: /var/run/dpdk/spdk0/config 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:34.595 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:34.595 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:34.595 Removing: /var/run/dpdk/spdk1/config 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:34.595 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:34.595 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:34.595 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:34.595 Removing: /var/run/dpdk/spdk2/config 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:34.595 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:34.595 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:34.595 Removing: /var/run/dpdk/spdk3/config 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:34.595 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:34.595 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:34.595 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:34.595 Removing: /var/run/dpdk/spdk4/config 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:34.595 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:34.595 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:34.595 Removing: /dev/shm/bdev_svc_trace.1 00:34:34.595 Removing: /dev/shm/nvmf_trace.0 00:34:34.595 Removing: /dev/shm/spdk_tgt_trace.pid1600071 00:34:34.595 Removing: /var/run/dpdk/spdk0 00:34:34.595 Removing: /var/run/dpdk/spdk1 00:34:34.595 Removing: /var/run/dpdk/spdk2 00:34:34.595 Removing: /var/run/dpdk/spdk3 00:34:34.595 Removing: /var/run/dpdk/spdk4 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1598574 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1600071 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1600962 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1602003 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1602620 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1602977 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1603373 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1603783 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1604188 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1604542 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1604689 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1604982 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1606368 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1609904 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1610206 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1610424 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1610729 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1611108 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1611374 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1611817 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1611917 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1612217 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1612537 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1612649 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1612918 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1613362 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1613712 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1614119 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1614293 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1614480 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1614563 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1614904 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1615090 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1615276 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1615703 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1616073 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1616327 00:34:34.595 
Removing: /var/run/dpdk/spdk_pid1616476 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1616800 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1617134 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1617708 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1618054 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1618306 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1618642 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1618987 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1619216 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1619389 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1619699 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1620049 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1620387 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1620560 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1620761 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1621114 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1621455 00:34:34.595 Removing: /var/run/dpdk/spdk_pid1621686 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1621836 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1622175 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1622514 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1622836 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1622959 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1623239 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1623575 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1623927 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1624120 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1624321 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1624640 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1624994 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1625301 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1625466 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1625705 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1626057 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1626170 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1626551 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1631313 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1732257 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1737679 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1749558 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1756160 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1761678 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1762399 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1772926 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1773375 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1778533 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1785538 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1788495 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1801175 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1812068 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1814308 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1815937 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1836899 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1841596 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1847063 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1848942 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1851285 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1851586 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1851705 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1852003 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1852729 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1854771 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1855862 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1856568 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1863375 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1870616 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1876492 00:34:34.596 
Removing: /var/run/dpdk/spdk_pid1922195 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1926944 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1934342 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1935863 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1937564 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1942822 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1947734 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1956959 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1957096 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1962333 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1962529 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1962707 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1963376 00:34:34.596 Removing: /var/run/dpdk/spdk_pid1963466 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1964895 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1967086 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1969047 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1971071 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1973102 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1975123 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1982629 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1983396 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1984513 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1985800 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1992142 00:34:34.857 Removing: /var/run/dpdk/spdk_pid1995383 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2002023 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2009097 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2016413 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2017109 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2017821 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2018574 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2019564 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2020336 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2021168 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2021957 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2027107 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2027440 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2034589 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2034960 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2037499 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2044883 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2044946 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2051058 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2053279 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2055708 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2057034 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2060102 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2061336 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2071725 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2072204 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2072855 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2075859 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2076535 00:34:34.857 Removing: /var/run/dpdk/spdk_pid2077022 00:34:34.857 Clean 00:34:35.118 killing process with pid 1539530 00:34:45.113 killing process with pid 1539527 00:34:45.113 killing process with pid 1539529 00:34:45.113 killing process with pid 1539528 00:34:45.113 05:49:47 -- common/autotest_common.sh@1446 -- # return 0 00:34:45.113 05:49:47 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:34:45.113 05:49:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.113 05:49:47 -- common/autotest_common.sh@10 -- # set +x 00:34:45.113 05:49:48 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:34:45.113 05:49:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.113 
00:34:45.113 05:49:47 -- common/autotest_common.sh@1446 -- # return 0
00:34:45.113 05:49:47 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:34:45.113 05:49:47 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:45.113 05:49:47 -- common/autotest_common.sh@10 -- # set +x
00:34:45.113 05:49:48 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:34:45.113 05:49:48 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:45.113 05:49:48 -- common/autotest_common.sh@10 -- # set +x
00:34:45.113 05:49:48 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:45.113 05:49:48 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:45.113 05:49:48 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:45.113 05:49:48 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:34:45.113 05:49:48 -- spdk/autotest.sh@383 -- # hostname
00:34:45.113 05:49:48 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:07.076 geninfo: WARNING: invalid characters removed from testname!
00:35:07.076 05:50:10 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:09.614 05:50:12 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:10.993 05:50:14 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:12.904 05:50:15 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:13.843 05:50:17 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:15.751 05:50:18 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:17.133 05:50:19 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
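The lcov calls traced above are the coverage post-processing: capture the counters produced by the test run, merge them with the pre-test baseline, strip sources that do not belong in the report (DPDK, system headers, helper apps), and delete the intermediates. A condensed sketch of the same sequence, assuming an lcov new enough for the --rc options shown in the log; OUT and SRC are placeholders for the output directory and the spdk checkout, and the loop is a simplification of the separate lcov -r calls:

  #!/usr/bin/env bash
  set -euo pipefail
  OUT=/path/to/output   # placeholder for .../spdk/../output
  SRC=/path/to/spdk     # placeholder for the spdk checkout
  RC=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

  # 1. capture counters written by the instrumented test run
  lcov "${RC[@]}" -q -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

  # 2. merge with the baseline captured before the tests
  lcov "${RC[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # 3. remove sources that should not appear in the report
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov "${RC[@]}" -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done

  # 4. drop the intermediates
  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"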
00:35:17.133 05:50:20 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:35:17.133 05:50:20 -- common/autotest_common.sh@1690 -- $ lcov --version
00:35:17.133 05:50:20 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:35:17.133 05:50:20 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:35:17.133 05:50:20 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:35:17.133 05:50:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:35:17.133 05:50:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:35:17.133 05:50:20 -- scripts/common.sh@335 -- $ IFS=.-:
00:35:17.133 05:50:20 -- scripts/common.sh@335 -- $ read -ra ver1
00:35:17.133 05:50:20 -- scripts/common.sh@336 -- $ IFS=.-:
00:35:17.133 05:50:20 -- scripts/common.sh@336 -- $ read -ra ver2
00:35:17.133 05:50:20 -- scripts/common.sh@337 -- $ local 'op=<'
00:35:17.133 05:50:20 -- scripts/common.sh@339 -- $ ver1_l=2
00:35:17.133 05:50:20 -- scripts/common.sh@340 -- $ ver2_l=1
00:35:17.133 05:50:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:35:17.133 05:50:20 -- scripts/common.sh@343 -- $ case "$op" in
00:35:17.133 05:50:20 -- scripts/common.sh@344 -- $ : 1
00:35:17.133 05:50:20 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:35:17.133 05:50:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:17.134 05:50:20 -- scripts/common.sh@364 -- $ decimal 1
00:35:17.134 05:50:20 -- scripts/common.sh@352 -- $ local d=1
00:35:17.134 05:50:20 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:35:17.134 05:50:20 -- scripts/common.sh@354 -- $ echo 1
00:35:17.134 05:50:20 -- scripts/common.sh@364 -- $ ver1[v]=1
00:35:17.134 05:50:20 -- scripts/common.sh@365 -- $ decimal 2
00:35:17.134 05:50:20 -- scripts/common.sh@352 -- $ local d=2
00:35:17.134 05:50:20 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:35:17.134 05:50:20 -- scripts/common.sh@354 -- $ echo 2
00:35:17.134 05:50:20 -- scripts/common.sh@365 -- $ ver2[v]=2
00:35:17.134 05:50:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:35:17.134 05:50:20 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:35:17.134 05:50:20 -- scripts/common.sh@367 -- $ return 0
00:35:17.134 05:50:20 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
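The scripts/common.sh trace above is a component-wise version comparison: "1.15 < 2" is evaluated by splitting both versions on their separators and comparing the fields left to right, which is how the script decides whether the installed lcov is older than 2.x. Below is a compact stand-alone rendition of the same idea; version_lt is a hypothetical name rather than the cmp_versions/lt helpers in the trace, and it assumes purely numeric components:

  #!/usr/bin/env bash
  # Sketch: succeed when dotted version $1 is strictly older than $2.
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  if version_lt 1.15 2; then
      echo "old lcov: keep only the basic --rc coverage options"
  fi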
00:35:17.134 05:50:20 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:35:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:17.134 --rc genhtml_branch_coverage=1
00:35:17.134 --rc genhtml_function_coverage=1
00:35:17.134 --rc genhtml_legend=1
00:35:17.134 --rc geninfo_all_blocks=1
00:35:17.134 --rc geninfo_unexecuted_blocks=1
00:35:17.134
00:35:17.134 '
00:35:17.134 05:50:20 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:35:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:17.134 --rc genhtml_branch_coverage=1
00:35:17.134 --rc genhtml_function_coverage=1
00:35:17.134 --rc genhtml_legend=1
00:35:17.134 --rc geninfo_all_blocks=1
00:35:17.134 --rc geninfo_unexecuted_blocks=1
00:35:17.134
00:35:17.134 '
00:35:17.134 05:50:20 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:35:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:17.134 --rc genhtml_branch_coverage=1
00:35:17.134 --rc genhtml_function_coverage=1
00:35:17.134 --rc genhtml_legend=1
00:35:17.134 --rc geninfo_all_blocks=1
00:35:17.134 --rc geninfo_unexecuted_blocks=1
00:35:17.134
00:35:17.134 '
00:35:17.134 05:50:20 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:35:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:17.134 --rc genhtml_branch_coverage=1
00:35:17.134 --rc genhtml_function_coverage=1
00:35:17.134 --rc genhtml_legend=1
00:35:17.134 --rc geninfo_all_blocks=1
00:35:17.134 --rc geninfo_unexecuted_blocks=1
00:35:17.134
00:35:17.134 '
00:35:17.134 05:50:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:17.134 05:50:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:17.134 05:50:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:17.134 05:50:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:17.134 05:50:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:17.134 05:50:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:17.134 05:50:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:17.134 05:50:20 -- paths/export.sh@5 -- $ export PATH
00:35:17.134 05:50:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:17.134 05:50:20 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:35:17.134 05:50:20 -- common/autobuild_common.sh@440 -- $ date +%s
00:35:17.134 05:50:20 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733547020.XXXXXX
00:35:17.134 05:50:20 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733547020.A4e5ZG
00:35:17.134 05:50:20 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:35:17.134 05:50:20 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
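Just above, autobuild_common.sh builds a per-run scratch directory name from the epoch time and lets mktemp add a random suffix, then records it as SPDK_WORKSPACE. A small sketch of that pattern; SPDK_WORKSPACE and the spdk_<epoch>.XXXXXX template come from the log, while the trap-based removal at exit is an illustrative addition rather than something shown in this trace:

  #!/usr/bin/env bash
  set -euo pipefail

  stamp=$(date +%s)                                    # e.g. 1733547020
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")  # e.g. /tmp/spdk_1733547020.A4e5ZG
  export SPDK_WORKSPACE

  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT                 # tidy up when the job ends
  echo "using workspace: $SPDK_WORKSPACE"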
00:35:17.134 05:50:20 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:35:17.134 05:50:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:35:17.134 05:50:20 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:35:17.134 05:50:20 -- common/autobuild_common.sh@456 -- $ get_config_params
00:35:17.134 05:50:20 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:35:17.134 05:50:20 -- common/autotest_common.sh@10 -- $ set +x
00:35:17.134 05:50:20 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:35:17.134 05:50:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:35:17.134 05:50:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:17.134 05:50:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:17.134 05:50:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:35:17.134 05:50:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:17.134 05:50:20 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:17.134 05:50:20 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:17.134 05:50:20 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:17.134 05:50:20 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:17.134 05:50:20 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:17.134 + [[ -n 1497179 ]]
00:35:17.134 + sudo kill 1497179
00:35:17.145 [Pipeline] }
00:35:17.165 [Pipeline] // stage
00:35:17.171 [Pipeline] }
00:35:17.185 [Pipeline] // timeout
00:35:17.190 [Pipeline] }
00:35:17.207 [Pipeline] // catchError
00:35:17.212 [Pipeline] }
00:35:17.230 [Pipeline] // wrap
00:35:17.236 [Pipeline] }
00:35:17.248 [Pipeline] // catchError
00:35:17.256 [Pipeline] stage
00:35:17.259 [Pipeline] { (Epilogue)
00:35:17.271 [Pipeline] catchError
00:35:17.272 [Pipeline] {
00:35:17.286 [Pipeline] echo
00:35:17.288 Cleanup processes
00:35:17.296 [Pipeline] sh
00:35:17.588 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:17.588 2093615 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:17.606 [Pipeline] sh
00:35:17.899 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:17.899 ++ grep -v 'sudo pgrep'
00:35:17.899 ++ awk '{print $1}'
00:35:17.899 + sudo kill -9
00:35:17.899 + true
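The Epilogue step above lists anything still running out of the test workspace with pgrep, strips the pgrep invocation itself, and force-kills the remaining PIDs, tolerating the common case where nothing is left (hence the bare "kill -9" followed by "true"). The same pattern as a stand-alone sketch; the workspace path is the one used by this job, everything else mirrors the trace:

  #!/usr/bin/env bash
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # pgrep -af prints "PID full-command"; drop the pgrep line, keep the PIDs
  pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

  # kill fails when the PID list is empty, so "|| true" keeps the stage green
  sudo kill -9 $pids 2>/dev/null || true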
00:35:17.913 [Pipeline] sh
00:35:18.203 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:30.448 [Pipeline] sh
00:35:30.735 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:30.735 Artifacts sizes are good
00:35:30.750 [Pipeline] archiveArtifacts
00:35:30.758 Archiving artifacts
00:35:30.960 [Pipeline] sh
00:35:31.377 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:31.393 [Pipeline] cleanWs
00:35:31.404 [WS-CLEANUP] Deleting project workspace...
00:35:31.404 [WS-CLEANUP] Deferred wipeout is used...
00:35:31.411 [WS-CLEANUP] done
00:35:31.413 [Pipeline] }
00:35:31.430 [Pipeline] // catchError
00:35:31.445 [Pipeline] sh
00:35:31.738 + logger -p user.info -t JENKINS-CI
00:35:31.748 [Pipeline] }
00:35:31.763 [Pipeline] // stage
00:35:31.770 [Pipeline] }
00:35:31.787 [Pipeline] // node
00:35:31.792 [Pipeline] End of Pipeline
00:35:31.839 Finished: SUCCESS